Differentially Private Federated Learning: A Systematic Review

May 2024 | JIE FU, YUAN HONG, XINPENG LING, LEIXIA WANG, XUN RAN, ZHIYU SUN, WENDY HUI WANG, ZHILI CHEN, YANG CAO
This paper presents a systematic review of differentially private federated learning (DP-FL), addressing the lack of comprehensive categorization and synthesis of existing studies in this area. The authors propose a new taxonomy of DP-FL based on the definitions and guarantees of differential privacy models and federated learning scenarios, classifying DP-FL into three main categories: differential privacy (DP), local differential privacy (LDP), and the shuffle model. This taxonomy clearly distinguishes the protected objects and privacy levels across the different models in federated learning environments. The authors also explore the applications of DP in federated learning scenarios, offering insights into privacy-preserving federated learning and suggesting practical directions for future work.

The paper discusses the technical aspects of federated learning and the three DP models, including their definitions, relationships, and properties. It also covers the fundamental perturbation mechanisms used in DP-FL, such as the Gaussian, Laplace, and Skellam mechanisms. The authors summarize over 70 recent articles on DP-FL, discussing their applications across horizontal, vertical, and transfer federated learning, and conclude with five promising directions for future research in DP-FL.
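To make the perturbation mechanisms mentioned above concrete, here is a minimal sketch of the Gaussian mechanism as it is commonly applied to client updates in DP-FL (clip the update to bound its sensitivity, then add calibrated Gaussian noise). The function name and parameters are illustrative assumptions, not the paper's own implementation.

```python
import numpy as np

def gaussian_mechanism(update, clip_norm, noise_multiplier, rng=None):
    """Clip a client update to a bounded L2 norm, then add Gaussian noise.

    Generic DP-SGD-style sketch (not from the reviewed paper):
    - clip_norm bounds the L2 sensitivity of the update;
    - noise standard deviation is noise_multiplier * clip_norm.
    """
    rng = np.random.default_rng() if rng is None else rng
    update = np.asarray(update, dtype=float)
    # Scale the update so its L2 norm is at most clip_norm.
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    # Calibrate the noise to the clipping bound (the sensitivity).
    sigma = noise_multiplier * clip_norm
    return clipped + rng.normal(0.0, sigma, size=clipped.shape)

# Example: privatize a toy 3-dimensional client update.
noisy = gaussian_mechanism([3.0, 4.0, 0.0], clip_norm=1.0, noise_multiplier=1.1)
```

The Laplace mechanism is analogous but calibrates Laplace noise to L1 sensitivity, while the Skellam mechanism adds discrete noise suitable for communication-constrained or securely aggregated settings.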