Federated Learning with Differential Privacy: Algorithms and Performance Analysis

8 Nov 2019 | Kang Wei, Jun Li, Ming Ding, Chuan Ma, Howard H. Yang, Farhad Farokhi, Shi Jin, Tony Q. S. Quek, H. Vincent Poor
This paper proposes a novel framework for federated learning (FL) with differential privacy (DP), called Noising before Model Aggregation FL (NbAFL), to prevent information leakage during the training process. In NbAFL, each client adds artificial noise to its model parameters locally, before they are uploaded for aggregation, ensuring that the shared updates satisfy DP. The authors theoretically analyze the convergence of the FL model under different privacy protection levels and show a fundamental tradeoff between convergence performance and privacy protection. They also propose a K-random scheduling strategy, in which only K randomly selected clients participate in each aggregation round, and show that it preserves the key properties of the NbAFL framework. The derived convergence bounds reveal that increasing the number of clients improves convergence performance, and that for a given privacy level there exists an optimal number of aggregation rounds. Experiments on real-world datasets confirm that the theoretical results align with simulations, demonstrating that NbAFL preserves privacy while maintaining convergence performance. The results highlight the importance of balancing privacy and convergence in FL systems.
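To make the "noising before model aggregation" idea concrete, here is a minimal NumPy sketch of one aggregation round with K-random scheduling. The function names (clip_update, noisy_client_update, aggregate_round) and the flat-vector model representation are illustrative assumptions, not the paper's code; in particular, the noise scale sigma must be calibrated to the clipping bound and the (epsilon, delta) privacy budget via the Gaussian mechanism, which this sketch leaves as a free parameter.

```python
import numpy as np

rng = np.random.default_rng(0)

def clip_update(w, clip_norm):
    # Clip the parameter vector so its L2 norm is at most clip_norm,
    # bounding each client's sensitivity before noise is added.
    norm = np.linalg.norm(w)
    return w * min(1.0, clip_norm / (norm + 1e-12))

def noisy_client_update(w, clip_norm, sigma):
    # "Noising before aggregation": the client perturbs its clipped
    # parameters with Gaussian noise locally, before upload.
    # NOTE (assumption): sigma should be set from the clipping bound
    # and the (epsilon, delta) budget; here it is passed in directly.
    w_clipped = clip_update(w, clip_norm)
    return w_clipped + rng.normal(0.0, sigma, size=w_clipped.shape)

def aggregate_round(client_weights, clip_norm, sigma, K):
    # K-random scheduling: only K of the N clients participate
    # in this aggregation round.
    N = len(client_weights)
    chosen = rng.choice(N, size=K, replace=False)
    noisy = [noisy_client_update(client_weights[i], clip_norm, sigma)
             for i in chosen]
    # The server only ever sees the noisy uploads, which it averages.
    return np.mean(noisy, axis=0)

# Toy usage: 10 clients, 5 scheduled per round, 8-dimensional models.
d, N, K = 8, 10, 5
weights = [rng.normal(size=d) for _ in range(N)]
global_w = aggregate_round(weights, clip_norm=1.0, sigma=0.1, K=K)
print(global_w)
```

Averaging K noisy uploads also illustrates why more participating clients help convergence: the independent per-client noise partially cancels in the mean, so the aggregated model is less perturbed than any individual upload.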