Byzantine-Robust Decentralized Federated Learning

October 14–18, 2024 | Minghong Fang, Zifan Zhang, Hairi, Prashant Khanduri, Jia Liu, Songtao Lu, Yuchen Liu, Neil Gong
The paper "Byzantine-Robust Decentralized Federated Learning" addresses the challenges of decentralized federated learning (DFL) in the context of poisoning attacks. DFL, which allows clients to collaboratively train machine learning models without sharing raw data, is vulnerable to attacks where malicious clients manipulate the system by sending crafted local models to their neighbors. Traditional server-assisted FL frameworks suffer from scalability issues and trust dependency problems, which motivated the development of DFL. However, existing DFL methods often lack communication efficiency or are not robust against advanced poisoning attacks.

To tackle these issues, the authors propose a new algorithm called BALANCE (Byzantine-robust averaging through local similarity in decentralization). BALANCE leverages each client's local model as a similarity reference to determine whether a received model is malicious or benign. The algorithm is designed to be communication-efficient and provides theoretical guarantees of convergence under both strongly convex and non-convex settings. The convergence rates of BALANCE match those of state-of-the-art Byzantine-free methods.

Extensive experiments on various datasets, poisoning attacks, and practical DFL settings demonstrate the effectiveness of BALANCE. The method outperforms existing DFL methods and effectively defends against poisoning attacks, achieving competitive learning performance and Byzantine robustness. The paper also discusses the impact of parameters and the communication costs of different methods, showing that BALANCE is both efficient and resilient.
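The core idea described above can be illustrated with a short sketch. The function below is a hypothetical, simplified rendering of a similarity-based acceptance rule in the spirit of BALANCE, not the paper's exact algorithm: a client accepts a neighbor's model only if it lies within a threshold of the client's own local model, with the threshold tightening as training progresses. The function name, parameter names, and the specific threshold schedule (`gamma`, `kappa`, exponential decay) are assumptions chosen for illustration.

```python
import numpy as np

def filter_and_aggregate(local_model, neighbor_models, t, gamma=0.3, kappa=0.5):
    """Sketch of a similarity-based filter: accept a neighbor's model only
    if its distance to the client's own local model is within a threshold
    that shrinks over training rounds t (names and schedule are illustrative)."""
    threshold = gamma * np.exp(-kappa * t) * np.linalg.norm(local_model)
    accepted = [m for m in neighbor_models
                if np.linalg.norm(m - local_model) <= threshold]
    if not accepted:
        # If every neighbor is rejected, fall back to the local model.
        return local_model
    # Average the local model with the accepted neighbor models.
    return np.mean([local_model] + accepted, axis=0)
```

Because the reference point is the client's own model rather than a statistic over all received models, a crafted model that deviates substantially is rejected regardless of how many malicious neighbors collude.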