2024 | Nicolò Romandini*, Alessio Mora, Carlo Mazzocca, Rebecca Montanari, Paolo Bellavista
This paper provides a comprehensive survey on Federated Unlearning (FU), a novel approach to address the right to be forgotten in Federated Learning (FL). FL enables collaborative training of machine learning models across multiple parties while preserving user privacy by keeping data locally. However, how to guarantee the right to be forgotten, which allows participants to remove their data contributions from the trained model, remains an open question in this setting. The paper highlights the need for FU algorithms that can efficiently remove specific client contributions without retraining the entire model.
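To make the problem concrete, here is a minimal, hypothetical sketch (not taken from the paper) of FedAvg-style aggregation and the naive "unlearning" baseline it implies: re-aggregating the remaining clients' updates as if the forgotten client had never participated. The function name `fedavg`, the client updates, and the data sizes are all illustrative assumptions; real FU schemes aim to approximate this exclusion (or full retraining) at far lower cost.

```python
import numpy as np

def fedavg(updates, weights):
    """Weighted average of client updates (FedAvg-style aggregation)."""
    total = sum(weights)
    return sum(w * u for u, w in zip(updates, weights)) / total

# Hypothetical flattened model updates from 3 clients and their data sizes.
rng = np.random.default_rng(0)
updates = [rng.normal(size=4) for _ in range(3)]
sizes = [100, 50, 150]

global_model = fedavg(updates, sizes)

# Naive baseline: re-aggregate without client 1's contribution.
# Efficient FU algorithms try to reach a comparable model without
# redoing the federation's work from scratch.
forget = 1
retained = [i for i in range(len(updates)) if i != forget]
unlearned_model = fedavg([updates[i] for i in retained],
                         [sizes[i] for i in retained])
```

Note that even this one-shot re-aggregation understates the real cost: in practice, the global model is the product of many training rounds, so the honest baseline is full retraining over all rounds without the forgotten client.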
The survey covers background concepts, empirical evidence, and practical guidelines for designing efficient FU schemes. It includes a detailed analysis of evaluation metrics for unlearning in FL and presents a literature review categorizing state-of-the-art FU contributions under a novel taxonomy. The paper also identifies open technical challenges and discusses promising research directions.
Key contributions of the survey include:
- Introducing the background concepts and motivations for FU algorithms.
- Providing experimental evidence demonstrating that FL global models can retain information about specific client data even after the client's data is no longer used.
- Offering comprehensive guidelines for designing and implementing FU algorithms, including the requirements they must meet and the metrics used for evaluation.
- Conducting a thorough review of existing FU literature, categorizing contributions based on objectives and metrics.
- Identifying open problems and discussing future research directions.
The paper aims to serve as a valuable resource for researchers seeking to understand FU and its recent advancements, as well as a practical guide for designing and implementing novel FU solutions.