SVFL: Efficient Secure Aggregation and Verification for Cross-Silo Federated Learning

January 2024 | Fucai Luo, Saif Al-Kuwari, and Yong Ding
SVFL is an efficient protocol for cross-silo federated learning (FL) that supports secure gradient aggregation and verification. The main challenges in FL include protecting the privacy of local gradients and trained models and verifying the correctness of aggregated gradients; existing approaches often suffer from high computation and communication overheads.

SVFL addresses these issues by replacing additively homomorphic encryption (HE) with a simple masking technique called masking with one-time pads (MOTP), which reduces overhead while preserving privacy. It also introduces an efficient verification mechanism built on a secure homomorphic network coding signature scheme (HNSig) to ensure the correctness of aggregated gradients. SVFL achieves low computation and communication overheads with minimal accuracy loss (less than 1%), and experimental results show that it outperforms existing FL protocols in both efficiency and accuracy.

SVFL targets cross-silo FL, where clients are organizations with sufficient computing resources and reliable communications, but it also extends to cross-device FL settings by using a Trusted Authority (TA) to initialize the model and generate parameters. Its verification mechanism is secure against active adversaries who attempt to forge aggregated gradients, giving SVFL strong privacy and correctness guarantees with a simple architecture suitable for large-scale FL applications.
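The abstract does not spell out how MOTP works, but masking schemes of this kind commonly use pairwise one-time pads that cancel when the server sums the clients' contributions. The sketch below is a minimal illustration of that generic idea in Python, not SVFL's actual construction; all function names (`pairwise_masks`, `mask`) and the use of floating-point pads are assumptions for illustration.

```python
import random

def pairwise_masks(n_clients, dim, seed=0):
    # Hypothetical sketch: for each pair (i, j) with i < j, draw a random pad;
    # client i adds it and client j subtracts it, so all pads cancel in the sum.
    rng = random.Random(seed)
    masks = [[0.0] * dim for _ in range(n_clients)]
    for i in range(n_clients):
        for j in range(i + 1, n_clients):
            r = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
            for k in range(dim):
                masks[i][k] += r[k]
                masks[j][k] -= r[k]
    return masks

def mask(gradient, pad):
    # Each client uploads only its masked gradient, hiding the true values.
    return [g + p for g, p in zip(gradient, pad)]

# Example: three clients with 4-dimensional local gradients.
grads = [[0.1, 0.2, 0.3, 0.4], [0.5, 0.6, 0.7, 0.8], [0.9, 1.0, 1.1, 1.2]]
pads = pairwise_masks(3, 4)
masked = [mask(g, p) for g, p in zip(grads, pads)]

# The server sums the masked gradients; the pads cancel, recovering the
# true aggregate without ever seeing any individual gradient in the clear.
agg = [sum(col) for col in zip(*masked)]
```

Summing the `masked` vectors reproduces the element-wise sum of the plaintext gradients (up to floating-point rounding), which is exactly the property a secure-aggregation server needs.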