2023 | Eleni Triantafillou, Peter Kairouz, Fabian Pedregosa, Jamie Hayes, Meghdad Kurmanji, Kairan Zhao, Vincent Dumoulin, Julio Jacques Junior, Ioannis Mitliagkas, Jun Wan, Lisheng Sun Hosoya, Sergio Escalera, Gintare Karolina Dziugaite, Peter Triantafillou, Isabelle Guyon
The first NeurIPS unlearning competition aimed to advance research on machine unlearning: removing the influence of specific training data from a trained model. The competition attracted over 1,200 teams and produced a wide range of novel solutions. Submissions were scored with an evaluation framework that measures forgetting quality according to a formal definition of unlearning and combines it with model utility for a holistic assessment, while also accounting for each algorithm's computational cost. Under this framework, the leading methods outperform existing baselines, indicating measurable progress in unlearning. The competition also underscored the value of standardized benchmarking, exposed the trade-offs among forgetting quality, model utility, and efficiency, and examined how well different algorithms generalize across datasets. Overall, the findings point to ongoing progress while highlighting the need for further work on the efficiency and effectiveness of unlearning algorithms.
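To make the scoring idea concrete, here is a minimal sketch of how a forgetting-quality measure might be combined with a utility penalty into a single score. It assumes per-example epsilon estimates (distinguishability between the unlearned model and a model retrained from scratch without the forget set) are already available; the `2**-eps` aggregation and the function names are illustrative assumptions, not the competition's exact formulas.

```python
import numpy as np

def forgetting_quality(per_example_epsilons: np.ndarray) -> float:
    """Map per-example epsilon estimates to a score in (0, 1].

    Smaller epsilon means the unlearned model is harder to distinguish
    from one retrained without the forget set, i.e. better forgetting.
    The 2**-eps form is one simple illustrative choice; the competition's
    actual aggregation differs in its details.
    """
    return float(np.mean(2.0 ** -per_example_epsilons))

def final_score(per_example_epsilons, retain_acc, test_acc,
                retrained_retain_acc, retrained_test_acc) -> float:
    """Combine forgetting quality with a utility factor.

    Utility is the unlearned model's accuracy on retained and held-out
    data, each taken relative to the retrained-from-scratch reference,
    so a method cannot score well by destroying the model.
    """
    utility = ((retain_acc / retrained_retain_acc)
               * (test_acc / retrained_test_acc))
    return forgetting_quality(np.asarray(per_example_epsilons)) * utility

# Hypothetical example: low epsilons (good forgetting) with a small
# utility drop relative to the retrained reference model.
eps = [0.1, 0.5, 2.0]
score = final_score(eps, retain_acc=0.91, test_acc=0.88,
                    retrained_retain_acc=0.93, retrained_test_acc=0.89)
print(f"final score: {score:.3f}")
```

Multiplying the two terms, rather than reporting them separately, is what forces the trade-off discussed above: gains in forgetting quality are worthless on the leaderboard if they come at a large cost in model utility.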