Byzantine-Robust Decentralized Federated Learning


October 14–18, 2024 | Minghong Fang, Zifan Zhang, Hairi, Prashant Khanduri, Jia Liu, Songtao Lu, Yuchen Liu, Neil Gong
Federated learning (FL) enables multiple clients to collaboratively train machine learning models without revealing their private training data. Conventional FL follows a server-assisted architecture, in which the training process is coordinated by a central server. However, the server-assisted framework suffers from poor scalability, due to a communication bottleneck at the server, and from trust-dependency issues. To address these challenges, decentralized federated learning (DFL) has been proposed, allowing clients to train models collaboratively in a serverless, peer-to-peer manner. DFL, however, is highly vulnerable to poisoning attacks, in which malicious clients manipulate the system by sending carefully crafted local models to their neighboring clients. To date, only a limited number of Byzantine-robust DFL methods have been proposed, and most are either communication-inefficient or remain vulnerable to advanced poisoning attacks.

In this paper, we propose a new algorithm called BALANCE (Byzantine-robust averaging through local similarity in decentralization) to defend against poisoning attacks in DFL. In BALANCE, each client uses its own local model as a similarity reference to determine whether a received model is malicious or benign. We establish theoretical convergence guarantees for BALANCE under poisoning attacks in both strongly convex and non-convex settings; moreover, the convergence rates of BALANCE under poisoning attacks match those of state-of-the-art counterparts in Byzantine-free settings. Extensive experiments also demonstrate that BALANCE outperforms existing DFL methods and effectively defends against poisoning attacks.

Our main contributions are as follows:

- BALANCE is a novel approach to defending against poisoning attacks in DFL. In contrast to existing DFL defenses, BALANCE achieves the same communication complexity as state-of-the-art server-assisted FL algorithms.
- We theoretically establish the convergence rate of BALANCE under poisoning attacks in both strongly convex and non-convex settings, and show that these rates match the optimal convergence rates of Byzantine-free strongly convex and non-convex optimization, respectively.
- Extensive experiments on different benchmark datasets, under various poisoning attacks and in practical DFL settings, verify the efficacy of the proposed BALANCE method.
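To make the core idea concrete, the following is a minimal sketch of a BALANCE-style acceptance rule: a client accepts a neighbor's model only if it is close to the client's own local model, relative to the local model's norm, and then averages its own model with the accepted ones. The threshold `gamma` and mixing weight `alpha` are illustrative placeholders, and this simplified rule omits details of the paper's exact criterion (such as a threshold that tightens over training rounds).

```python
import numpy as np

def is_benign(local_model, received_model, gamma=0.3):
    """Accept a received model only if its distance to our own local model
    is at most gamma times the local model's norm (illustrative rule)."""
    dist = np.linalg.norm(received_model - local_model)
    return dist <= gamma * np.linalg.norm(local_model)

def balance_style_aggregate(local_model, neighbor_models, alpha=0.5, gamma=0.3):
    """Filter neighbor models with the similarity rule above, then mix the
    client's own model with the average of the accepted models."""
    accepted = [m for m in neighbor_models if is_benign(local_model, m, gamma)]
    if not accepted:
        # No neighbor passed the filter: fall back to the local model alone.
        return local_model
    neighbor_avg = np.mean(accepted, axis=0)
    return alpha * local_model + (1 - alpha) * neighbor_avg

# A nearby (benign) model passes the filter; a scaled-up (poisoned) one does not.
local = np.ones(4)
benign = local + 0.05
poisoned = 10.0 * local
filtered = balance_style_aggregate(local, [benign, poisoned])
```

Because the poisoned model is rejected, the aggregate here is just the mix of the local model and the benign neighbor, so a single attacker cannot drag the client's model arbitrarily far.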