This paper studies local model poisoning attacks against Byzantine-robust federated learning, a paradigm in which many client devices collaboratively train a machine learning model while keeping their training data local. The authors formulate such attacks as optimization problems: an attacker who controls a set of compromised clients crafts their local model updates so that the aggregated global model deviates as far as possible from the model that would be learned without the attack. They instantiate these attacks against four recent Byzantine-robust federated learning methods, and their empirical results on real-world datasets show that the attacks substantially increase the error rates of the learned models, even though these methods were claimed to be robust against Byzantine failures. The paper also generalizes two defenses designed for data poisoning attacks to the local model poisoning setting, finds that they are not consistently effective, and argues that new defenses are needed. The key contributions are the first systematic study of local model poisoning attacks on Byzantine-robust federated learning, the formulation of these attacks as optimization problems, and the evaluation of the generalized defenses.
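The summary does not spell out the aggregation rules or the attack's exact solver, but a small sketch makes the setting concrete. The Python sketch below assumes coordinate-wise trimmed mean as the Byzantine-robust aggregation rule (one of the rules the paper evaluates) and uses a simple directed-deviation heuristic in place of the paper's actual optimization procedure; the function names and the `scale`-free placement strategy are illustrative assumptions, not the authors' code.

```python
import numpy as np

def trimmed_mean(updates, k):
    """Coordinate-wise trimmed mean: per parameter, drop the k largest
    and k smallest client values, then average the remainder."""
    stacked = np.sort(np.stack(updates), axis=0)  # sort each coordinate
    return stacked[k:len(updates) - k].mean(axis=0)

def craft_malicious_updates(benign_updates, num_compromised):
    """Illustrative directed-deviation attack (not the paper's solver):
    place compromised values just beyond the benign extremes, opposite
    the sign of the benign mean, so trimming discards them but also
    discards benign extremes on the other side, dragging the average."""
    stacked = np.stack(benign_updates)
    benign_mean = stacked.mean(axis=0)
    lo, hi = stacked.min(axis=0), stacked.max(axis=0)
    # Coordinates to decrease sit just below the benign minimum;
    # coordinates to increase sit just above the benign maximum.
    target = np.where(benign_mean > 0,
                      lo - 0.1 * np.abs(lo),
                      hi + 0.1 * np.abs(hi))
    return [target.copy() for _ in range(num_compromised)]

# Toy run: 8 benign clients, 2 compromised, trim k=2 per side.
rng = np.random.default_rng(0)
benign = [rng.normal(1.0, 0.1, size=5) for _ in range(8)]
poisoned = benign + craft_malicious_updates(benign, num_compromised=2)
print("clean aggregate:   ", trimmed_mean(benign, k=2))
print("poisoned aggregate:", trimmed_mean(poisoned, k=2))
```

Even in this toy example the poisoned aggregate shifts away from the clean one despite the trimming, which mirrors the paper's broader finding that robust aggregation rules can still be manipulated by carefully crafted local models.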