How To Backdoor Federated Learning


6 Aug 2019 | Eugene Bagdasaryan, Andreas Veit, Yiqing Hua, Deborah Estrin, Vitaly Shmatikov
Federated learning enables thousands of participants to train a deep learning model without sharing private data. However, it is vulnerable to model-poisoning attacks, in which malicious participants introduce backdoor functionality into the joint model. This paper shows that a single participant can replace the joint model with a malicious one that performs a backdoor task accurately while maintaining high performance on the main task. The attack exploits the fact that federated learning lets participants directly influence the joint model's weights and train in ways that benefit the attack.

The paper evaluates model replacement on image classification and word prediction tasks and shows that it outperforms traditional data poisoning. The attack is effective even when the attacker controls fewer than 1% of the participants. Model replacement can also evade anomaly detection by incorporating evasion objectives into the attacker's loss function, and the backdoor remains in the model for many rounds after the attacker is no longer selected.

The paper further evaluates different scaling factors: larger scaling factors increase backdoor accuracy but also increase the distance between the submitted model and the global model, making the update easier to flag. It concludes that federated learning is generically vulnerable to backdoor attacks and that secure aggregation cannot prevent them.
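To make the model-replacement idea concrete, here is a minimal sketch of the scaling step in PyTorch. It assumes the standard federated-averaging rule in which the server adds the average of participant updates, scaled by a global learning rate eta, to the current global model; the function name, variable names, and framing are illustrative assumptions, not code from the paper.

import copy
import torch

def model_replacement_update(global_model, backdoored_model, n_participants, global_lr=1.0):
    # Sketch: scale the attacker's backdoored model X so that, after the
    # server averages updates as G_{t+1} = G_t + (eta/n) * sum_i (L_i - G_t),
    # the attacker's submission approximately replaces the global model.
    # The attacker submits L = gamma * (X - G_t) + G_t with gamma ~ n / eta.
    gamma = n_participants / global_lr
    global_state = global_model.state_dict()
    backdoor_state = backdoored_model.state_dict()
    scaled_state = {}
    for name, g_param in global_state.items():
        x_param = backdoor_state[name]
        if not torch.is_floating_point(g_param):
            # Integer buffers (e.g., batch-norm counters) are copied as-is.
            scaled_state[name] = x_param
            continue
        # Boost the attacker's deviation so it survives averaging with the
        # benign participants' updates.
        scaled_state[name] = gamma * (x_param - g_param) + g_param
    submitted = copy.deepcopy(backdoored_model)
    submitted.load_state_dict(scaled_state)
    return submitted

With gamma close to n/eta, the attacker's scaled deviation dominates the averaged update and the new global model lands near the backdoored model X; choosing a smaller gamma trades backdoor accuracy for a smaller distance from the global model, which matches the scaling-factor trade-off described above.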