SVFL: Efficient Secure Aggregation and Verification for Cross-Silo Federated Learning

Vol. 23, No. 1, January 2024 | Fucai Luo, Saif Al-Kuwari, and Yong Ding
The paper introduces SVFL, an efficient protocol for cross-silo federated learning (FL) that supports secure gradient aggregation and verification. SVFL aims to address the main security issues in FL, such as the privacy of gradients and the correctness verification of the aggregated gradient. To achieve this, SVFL replaces heavy homomorphic encryption (HE) operations with a simple masking technique called Masking with One-Time Pads (MOTP), which reduces computational and communication overheads. Additionally, SVFL employs a secure homomorphic network coding signature scheme (HNSig) to verify the correctness of the aggregated gradient.

The paper provides a comprehensive security analysis, demonstrating that SVFL ensures the privacy of local gradients and the trained model, as well as the verifiability of the aggregated gradient. Complexity analysis and experimental evaluations show that SVFL maintains low computational and communication overheads, even on large datasets, with a negligible accuracy loss (less than 1%). Experimental comparisons with existing FL protocols, including BatchCrypt, VFL, VerifyNet, and VeriFL, demonstrate significant efficiency improvements in both computation and communication.
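To give a flavor of pad-based secure aggregation, the sketch below shows a generic one-time-pad masking scheme in which clients' pads sum to zero, so they cancel when the server aggregates. This is an illustrative simplification, not the paper's exact MOTP construction; the modulus `Q` and the quantization of gradients to integers are assumptions for the example.

```python
import random

# Illustrative modulus (assumption; real schemes fix a ring for quantized gradients)
Q = 2**61 - 1

def make_cancelling_pads(n_clients, dim, rng):
    """One-time pads that sum to zero mod Q, so they cancel on aggregation."""
    pads = [[rng.randrange(Q) for _ in range(dim)] for _ in range(n_clients - 1)]
    pads.append([(-sum(col)) % Q for col in zip(*pads)])
    return pads

def mask(gradient, pad):
    """Client-side: hide a quantized local gradient under its one-time pad."""
    return [(g + p) % Q for g, p in zip(gradient, pad)]

def aggregate(masked_gradients):
    """Server-side: sum masked gradients; pads cancel, revealing only the sum."""
    return [sum(col) % Q for col in zip(*masked_gradients)]

rng = random.Random(0)
grads = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
pads = make_cancelling_pads(3, 3, rng)
masked = [mask(g, p) for g, p in zip(grads, pads)]
agg = aggregate(masked)  # equals the coordinate-wise sum [12, 15, 18]
```

Each individual masked vector is statistically independent of the underlying gradient, while the server still recovers the correct sum; this is the efficiency advantage over per-coordinate homomorphic encryption that the abstract highlights.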
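The verifiability property can be illustrated with a toy additively homomorphic hash: the hash of a sum of vectors equals the product of the individual hashes, so clients can check the server's aggregate without seeing other clients' gradients. This stands in for HNSig only conceptually; the prime `P` and the fixed public bases in `H_GENS` are assumptions of this sketch, and a real signature scheme would also bind each hash to its signer.

```python
# Toy homomorphic hash (assumption: illustrative only, not the paper's HNSig).
P = 2**127 - 1        # a Mersenne prime, adequate for illustration
H_GENS = [3, 5, 7]    # fixed public bases, one per gradient coordinate

def hom_hash(vec):
    """H(v) = prod_i g_i^{v_i} mod P; multiplicative under vector addition."""
    acc = 1
    for g, v in zip(H_GENS, vec):
        acc = (acc * pow(g, v, P)) % P
    return acc

def verify_aggregate(agg, client_hashes):
    """Check H(aggregate) == product of per-client hashes mod P."""
    expected = 1
    for h in client_hashes:
        expected = (expected * h) % P
    return hom_hash(agg) == expected

grads = [[1, 2, 3], [4, 5, 6]]
agg = [sum(col) for col in zip(*grads)]          # honest aggregate [5, 7, 9]
hashes = [hom_hash(g) for g in grads]
ok = verify_aggregate(agg, hashes)               # accepts the honest aggregate
```

A tampered aggregate fails the check, which is the kind of correctness guarantee the abstract attributes to HNSig, obtained there with unforgeable signatures rather than a bare hash.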