A Hybrid Approach to Privacy-Preserving Federated Learning

November 15, 2019 | Stacey Truex, Nathalie Baracaldo, Ali Anwar, Thomas Steinke, Heiko Ludwig, Rui Zhang, Yi Zhou
This hybrid approach to privacy-preserving federated learning combines differential privacy (DP) and secure multiparty computation (SMC) to balance privacy and model accuracy. The system ensures data privacy by preventing inference over both the exchanged messages and the final model, while maintaining acceptable predictive accuracy. It addresses the limitations of applying either technique alone: SMC by itself remains vulnerable to inference over the aggregated output, while DP by itself leads to low accuracy when many parties participate.

The proposed approach uses a customizable trust threshold to manage collusion risks and reduces the noise each party must inject as the number of parties increases, preserving both privacy and accuracy; a code sketch of this idea follows below. Privacy is enforced with additively homomorphic encryption combined with DP, and the system handles a range of trust scenarios and model types.

Experimental results show that the system outperforms state-of-the-art solutions in training decision trees, convolutional neural networks (CNNs), and linear support vector machines (SVMs). It is scalable, secure, and effective at protecting data privacy while enabling collaborative model training across distributed datasets, and it provides formal privacy guarantees with improved accuracy compared to existing privacy-preserving methods.
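To make the noise-reduction idea concrete, below is a minimal sketch of one aggregation round, not the authors' implementation. It assumes a Gaussian mechanism, a trust threshold TRUST_T meaning at least that many parties are honest and non-colluding, and the python-paillier (`phe`) library standing in for the threshold homomorphic scheme the paper relies on. The parameter values (N_PARTIES, TRUST_T, SIGMA) and the random updates are illustrative assumptions, not values from the paper.

```python
import math
import random

from phe import paillier  # python-paillier; install with: pip install phe

# --- Illustrative parameters (assumptions for this sketch, not paper values) ---
N_PARTIES = 10   # total number of participating parties
TRUST_T = 7      # trust threshold: assume at least 7 honest, non-colluding parties
SIGMA = 4.0      # noise std. dev. the central DP guarantee would require

rng = random.Random(42)  # deterministic for the demo; use a secure RNG in practice

# The paper relies on a *threshold* scheme so that no single party can decrypt
# alone; plain Paillier stands in for it here.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

def noisy_encrypted_update(update: float) -> paillier.EncryptedNumber:
    """Perturb a local update with Gaussian noise of std. dev. SIGMA/sqrt(TRUST_T),
    then encrypt it. The noise contributed by TRUST_T honest parties sums to
    std. dev. SIGMA, so the decrypted aggregate carries the centrally required
    noise even though each party adds far less than it would under local DP."""
    noise = rng.gauss(0.0, SIGMA / math.sqrt(TRUST_T))
    return public_key.encrypt(update + noise)

# Stand-ins for locally computed model updates (e.g., one gradient coordinate).
local_updates = [rng.uniform(-1.0, 1.0) for _ in range(N_PARTIES)]
ciphertexts = [noisy_encrypted_update(u) for u in local_updates]

# The aggregator adds ciphertexts; Paillier addition of ciphertexts corresponds
# to addition of the underlying plaintexts, so no individual update is exposed.
encrypted_sum = ciphertexts[0]
for c in ciphertexts[1:]:
    encrypted_sum = encrypted_sum + c

# Decryption (jointly performed in the threshold scheme) reveals only the
# already-noisy aggregate.
print("noisy aggregate:", private_key.decrypt(encrypted_sum))
```

Because each party's noise scale is SIGMA/sqrt(TRUST_T) rather than the full SIGMA, assuming more honest parties lets every participant add less noise, which is how this construction recovers accuracy that purely local DP sacrifices.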