November 15, 2019 | Stacey Truex, Nathalie Baracaldo, Ali Anwar, Thomas Steinke, Heiko Ludwig, Rui Zhang, Yi Zhou
This paper presents a hybrid approach to privacy-preserving federated learning that combines differential privacy (DP) with secure multiparty computation (SMC). The authors address the limitations of existing federated learning systems, which rely either on SMC alone, leaving the trained model vulnerable to inference attacks, or on DP alone, which can yield low accuracy when many parties each contribute only a small dataset. The proposed system prevents inference over both the messages exchanged during training and the final trained model while maintaining acceptable predictive accuracy. It introduces a tunable trust parameter to accommodate different trust scenarios and scales to a large number of parties. Experimental results on decision trees, convolutional neural networks, and linear support vector machines show that the approach outperforms state-of-the-art solutions. The system is designed to be flexible, applies to a wide range of machine learning models, and balances privacy, trust, and accuracy.
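The core mechanism can be sketched roughly as follows: because secure aggregation hides individual contributions, each party only needs to add a fraction of the DP noise, and the trust parameter t (the assumed minimum number of honest, non-colluding parties) controls that fraction. The sketch below is a minimal illustration of this idea; the function names, the zero-sum masking used as a stand-in for the paper's threshold homomorphic encryption, and the exact noise calibration are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def make_zero_sum_masks(num_parties, dim, rng):
    """Random masks that cancel when summed; a simple stand-in for secure
    aggregation, where the aggregator sees only masked per-party messages
    yet still recovers the exact sum."""
    masks = rng.normal(scale=100.0, size=(num_parties, dim))
    masks[-1] = -masks[:-1].sum(axis=0)  # force the masks to sum to zero
    return masks

def hybrid_private_aggregate(local_updates, sensitivity, epsilon, delta, t, rng):
    """Aggregate per-party updates with reduced per-party Gaussian noise.

    t is the tunable trust parameter: the minimum number of honest,
    non-colluding parties assumed to contribute noise. Each party adds
    noise with standard deviation sigma / sqrt(t), so the combined noise
    from t honest parties reaches the full sigma that the Gaussian
    mechanism requires for (epsilon, delta)-DP on the aggregate.
    """
    num_parties, dim = local_updates.shape
    # Standard Gaussian-mechanism calibration (illustrative constant).
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    per_party_sigma = sigma / np.sqrt(t)

    masks = make_zero_sum_masks(num_parties, dim, rng)
    messages = (local_updates
                + rng.normal(scale=per_party_sigma, size=local_updates.shape)
                + masks)
    # The aggregator sums the masked, noisy messages; masks cancel, noise stays.
    return messages.sum(axis=0)

# Toy usage: 10 parties, each holding a 5-dimensional local model update.
rng = np.random.default_rng(0)
updates = rng.normal(size=(10, 5))
noisy_sum = hybrid_private_aggregate(updates, sensitivity=1.0,
                                     epsilon=1.0, delta=1e-5, t=7, rng=rng)
print(noisy_sum)
```

Raising t reduces the per-party noise and therefore improves accuracy, while lowering t tolerates more colluding parties at the cost of more noise; this is the privacy-trust-accuracy trade-off the paper tunes.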