August 2021 | Nicolò Romandini, Alessio Mora, Carlo Mazzocca, Rebecca Montanari, Paolo Bellavista
This paper presents a survey on Federated Unlearning (FU), a critical area in Federated Learning (FL) that enables the removal of specific clients' contributions from the global model without full retraining. The paper addresses the challenge of ensuring the right to be forgotten in FL, where clients can request the removal of their data contributions from the learned model. It also tackles the issue of malicious clients injecting backdoors into the global model through updates. The survey provides background concepts, empirical evidence, and practical guidelines for designing efficient FU schemes. It includes a detailed analysis of metrics for evaluating unlearning in FL and presents an in-depth literature review categorizing state-of-the-art FU contributions under a novel taxonomy. The paper outlines the most relevant and still open technical challenges, identifying the most promising research directions in the field.
The paper discusses the need for FU algorithms that can efficiently remove specific clients' contributions without full model retraining, ensuring that the "good" knowledge acquired after the unlearning point is not compromised. It highlights the challenges of unlearning in FL, including the decentralized and inscrutable nature of data, the iterative and stochastic nature of FL, and the need for effective metrics to assess the success of unlearning. The paper also explores the differences between FU and traditional Machine Unlearning (MU), emphasizing the unique challenges posed by FL's decentralized architecture.
The paper introduces various metrics for assessing the effectiveness of FU, including improved efficiency metrics, retained or recovered performance metrics, and forgetting verification metrics. These metrics help evaluate the success of unlearning in terms of performance, privacy, and the ability to remove harmful contributions from the global model. The paper also discusses the challenges of adapting MU to FL settings, including the non-deterministic training process, inscrutable model updates, finite memory, inscrutable local datasets, and iterative learning processes. It highlights the importance of effective unlearning in ensuring privacy and security in FL, particularly in the context of data regulations such as GDPR and CCPA. The paper concludes with a discussion of the most relevant and open technical challenges in the field, identifying the most promising research directions for future work.
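A toy illustration of two of these metric families: retained performance (accuracy on the remaining clients' data should stay high after unlearning) and forgetting verification (accuracy on the forgotten client's data should drop toward what a never-trained model would achieve). The `model_fn` interface and threshold below are illustrative assumptions, not definitions from the survey.

```python
def evaluate_unlearning(model_fn, retained_data, forgotten_data):
    """Compare retained performance vs. forgetting on held-out data.

    model_fn: hypothetical predictor mapping an input to a label.
    retained_data / forgotten_data: (inputs, labels) pairs.
    """
    def accuracy(data):
        xs, ys = data
        preds = [model_fn(x) for x in xs]
        return sum(int(p == y) for p, y in zip(preds, ys)) / len(ys)

    return {
        "retained_accuracy": accuracy(retained_data),   # should stay high
        "forgotten_accuracy": accuracy(forgotten_data),  # should drop
    }
```

In practice, forgetting verification is harder than this sketch suggests: the survey covers stronger probes such as membership inference attacks and backdoor-trigger success rates, which test whether the forgotten client's influence is truly gone rather than merely degraded.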