Local Model Poisoning Attacks to Byzantine-Robust Federated Learning


2020 | Minghong Fang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong
This paper investigates local model poisoning attacks against Byzantine-robust federated learning (FL). The authors propose a novel attack in which compromised worker devices manipulate their local model parameters during the FL process so as to increase the testing error rate of the global model. They evaluate the attack against four recent Byzantine-robust aggregation rules: Krum, Bulyan, trimmed mean, and median. The results show that the attack significantly increases the error rates of the global models, even for methods claimed to be robust against Byzantine failures. The authors also generalize two existing defenses against data poisoning attacks to defend against their local model poisoning attacks; however, these defenses are not always effective, highlighting the need for new ones. Their attacks are more effective than existing data poisoning attacks such as label flipping and back-gradient optimization, and their effectiveness depends on the number of compromised worker devices and the aggregation rule used. The results suggest that Byzantine-robust FL may still be the best option in scenarios where users' training data can only be stored on their edge/mobile devices and attacks may exist, even though its error rate is higher than that of centralized learning.
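To make the targeted aggregation rules concrete, the following is a minimal NumPy sketch of coordinate-wise median, coordinate-wise trimmed mean, and Krum, assuming each worker's local model has been flattened into a 1-D parameter vector. The function names and arguments are illustrative simplifications, not the paper's code; the attack itself (which crafts the compromised workers' local models to steer the aggregated result) is not reproduced here.

```python
import numpy as np

def coordinatewise_median(local_models):
    """Coordinate-wise median: each global parameter is the median of the
    corresponding parameter across all workers' local models."""
    return np.median(np.stack(local_models), axis=0)

def trimmed_mean(local_models, beta):
    """Coordinate-wise trimmed mean: for each parameter, drop the beta largest
    and beta smallest values, then average the remaining n - 2*beta values."""
    stacked = np.sort(np.stack(local_models), axis=0)
    return stacked[beta: len(local_models) - beta].mean(axis=0)

def krum(local_models, num_compromised):
    """Krum: pick the single local model whose summed squared distance to its
    n - c - 2 closest other local models is smallest (c = compromised count)."""
    models = np.stack(local_models)
    n = len(models)
    # Pairwise squared Euclidean distances between local models.
    dists = np.sum((models[:, None, :] - models[None, :, :]) ** 2, axis=-1)
    scores = []
    for i in range(n):
        closest = np.sort(np.delete(dists[i], i))[: n - num_compromised - 2]
        scores.append(closest.sum())
    return models[int(np.argmin(scores))]

# Example (synthetic): 10 workers, 5 parameters, at most 2 compromised workers.
rng = np.random.default_rng(0)
local_models = [rng.normal(size=5) for _ in range(10)]
g_median = coordinatewise_median(local_models)
g_trimmed = trimmed_mean(local_models, beta=2)
g_krum = krum(local_models, num_compromised=2)
```

The sketch illustrates why these rules tolerate some outliers (extreme values are trimmed, medianed out, or excluded by distance-based selection), and also why the paper's attack works by submitting compromised local models that stay close enough to the benign ones to survive aggregation while still pulling the global model in a harmful direction.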