Communication Efficient and Provable Federated Unlearning

2024 | Youming Tao, Cheng-Long Wang, Miao Pan, Dongxiao Yu, Xiuzhen Cheng, Di Wang
This paper introduces a novel framework for exact federated unlearning that is both communication efficient and provably correct. It addresses the challenge of removing the influence of specific clients or individual data points from a global model trained via federated learning (FL), a capability essential for privacy and data management.

The key contribution is a formal definition of exact federated unlearning, which guarantees that the unlearned model is statistically indistinguishable from one trained without the deleted data. The framework builds on total variation (TV) stability, a property that bounds how sensitive the distribution of the learned model parameters is to small changes in the dataset. The authors modify the classical FedAvg algorithm to satisfy TV stability and use local SGD with periodic averaging to reduce the number of communication rounds; on top of this learning procedure, they design efficient unlearning algorithms for both client-level and sample-level deletion requests (see the sketches below).

Theoretical guarantees establish that the learning and unlearning algorithms achieve exact federated unlearning with reasonable convergence rates. Empirical validation on six benchmark datasets shows that the framework outperforms state-of-the-art methods in accuracy, communication cost, computation cost, and unlearning efficacy, making it well suited to practical federated learning deployments.
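For reference, TV stability is defined via the total variation distance between output distributions. This is the standard notion from the machine-unlearning literature; the paper's exact parameterization may differ. A randomized algorithm $\mathcal{A}$ is $\rho$-TV-stable if, for any two adjacent datasets $S$ and $S'$ (differing in one client or one sample),

$$\mathrm{TV}\big(\mathcal{A}(S),\,\mathcal{A}(S')\big) \;=\; \sup_{E}\,\big|\Pr[\mathcal{A}(S) \in E] - \Pr[\mathcal{A}(S') \in E]\big| \;\le\; \rho.$$

Intuitively, the smaller $\rho$ is, the cheaper it is to transform the trained model into one that could have been produced without the deleted data.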
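As a rough illustration of the communication pattern the paper builds on, here is a minimal sketch of local SGD with periodic averaging on a toy linear-regression problem. The model, loss, step counts, and learning rate are illustrative assumptions, and the paper's TV-stability modifications to FedAvg are omitted; this is not the authors' code.

```python
"""Toy sketch: local SGD with periodic averaging (FedAvg-style)."""
import numpy as np

rng = np.random.default_rng(0)
d, n_clients, n_samples = 5, 4, 50
w_true = rng.normal(size=d)

# Synthetic per-client datasets (X_i, y_i) for a linear model.
clients = []
for _ in range(n_clients):
    X = rng.normal(size=(n_samples, d))
    y = X @ w_true + 0.1 * rng.normal(size=n_samples)
    clients.append((X, y))

def local_sgd(w, X, y, lr, steps, rng):
    """Run `steps` of single-sample SGD on squared loss, starting from w."""
    w = w.copy()
    for _ in range(steps):
        i = rng.integers(len(y))
        grad = 2 * (X[i] @ w - y[i]) * X[i]
        w -= lr * grad
    return w

w_global = np.zeros(d)
rounds, local_steps, lr = 20, 10, 0.01
for r in range(rounds):
    # Each client runs several local steps between communications,
    # which is what reduces the number of communication rounds.
    local_models = [local_sgd(w_global, X, y, lr, local_steps, rng)
                    for X, y in clients]
    # Periodic averaging: one communication round per outer iteration.
    w_global = np.mean(local_models, axis=0)

print("distance to w_true:", np.linalg.norm(w_global - w_true))
```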
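The paper's unlearning algorithms are more specific than what fits here, but the generic structure of exact client-level unlearning can be sketched as checkpoint rollback: record the per-round models and participant sets during training, roll back to the first round the deleted client influenced, and re-run only the remaining rounds with that client excluded. Everything below (function names, the integer toy model in the demo) is a hypothetical illustration under that assumption, not the authors' algorithm.

```python
"""Hypothetical sketch: exact client-level unlearning via checkpoint rollback."""

def train_rounds(w0, clients, sample_round, run_round, n_rounds):
    """Train while recording, per round, the pre-round model and the
    subset of clients that participated in that round."""
    history, w = [], w0
    for r in range(n_rounds):
        participants = sample_round(r, clients)
        history.append((w, participants))
        w = run_round(w, participants)
    return w, history

def unlearn_client(target, final_w, history, run_round):
    """Roll back to the first round `target` participated in, then re-run
    the remaining rounds without it. Earlier rounds are reused as-is, so
    the cost scales with how late the client first appears."""
    first = next((r for r, (_, parts) in enumerate(history)
                  if target in parts), None)
    if first is None:            # target never participated: nothing to redo
        return final_w, history
    w = history[first][0]        # checkpoint just before target's influence
    new_history = history[:first]
    for _, parts in history[first:]:
        parts = [c for c in parts if c != target]
        new_history.append((w, parts))
        w = run_round(w, parts)
    return w, new_history

if __name__ == "__main__":
    # Toy demo with integer "models": each round adds the ids of the
    # participating clients to the model, so exactness is easy to check.
    run = lambda w, parts: w + sum(parts)
    sample = lambda r, cs: cs    # every client participates each round
    final, hist = train_rounds(0, [1, 2, 3], sample, run, n_rounds=3)
    unlearned, _ = unlearn_client(2, final, hist, run)
    retrained, _ = train_rounds(0, [1, 3], sample, run, n_rounds=3)
    assert unlearned == retrained  # exactness: identical to retraining
```

In the paper's setting, TV stability is what makes this kind of rollback cheap on average: a TV-stable learner rarely needs to redo much work to match the retrained-from-scratch distribution.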