Anomaly detection and defense techniques in federated learning: a comprehensive review


23 May 2024 | Chang Zhang¹ · Shunkun Yang² · Lingfeng Mao¹ · Huansheng Ning¹
This paper provides a comprehensive review of anomaly detection and defense techniques in federated learning (FL), focusing on security and privacy issues. FL is a decentralized machine learning approach that allows clients to train models without sharing raw data, thus protecting user privacy. However, FL systems are vulnerable to a range of security and privacy attacks, including data poisoning, model poisoning, backdoor attacks, Byzantine attacks, Sybil attacks, free-riding attacks, and inference attacks. These attacks can compromise the integrity, accuracy, and privacy of FL models.

The paper categorizes these anomalies from the perspectives of clients, servers, and communication processes, and discusses existing detection and defense methods. It also addresses the challenges posed by non-independent and identically distributed (non-IID) data in FL and summarizes related research progress. The aim is to provide a systematic and comprehensive review of security and privacy research in FL, helping readers understand the state of the field and apply FL in additional scenarios.

The review highlights the importance of robust defense mechanisms, such as secure communication, data encryption, anomaly detection, and adversarial robustness techniques, to mitigate the risks posed by attacks on FL. It also discusses the limitations of existing surveys and proposes a novel classification of anomaly detection and defense in FL to facilitate timely identification and defense measures. Finally, it emphasizes the need for further research to address the challenges of non-IID data and to improve the security and privacy of FL systems.
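To make the defense ideas in the abstract concrete, the following is a minimal, hypothetical sketch (not a method from the paper itself) of one common Byzantine-robust aggregation defense discussed in the FL literature: replacing the server's plain averaging of client updates with a coordinate-wise median, so that a minority of poisoned updates cannot drag the aggregate arbitrarily far. The client values below are invented for illustration.

```python
from statistics import median

def coordinate_wise_median(client_updates):
    """Aggregate client model updates parameter-by-parameter using the median.

    Unlike plain federated averaging, the per-coordinate median ignores
    extreme values, so a minority of Byzantine or poisoned clients cannot
    move any coordinate of the aggregate arbitrarily far.
    """
    n_params = len(client_updates[0])
    return [median(u[p] for u in client_updates) for p in range(n_params)]

# Four honest clients send similar 2-parameter updates; one attacker
# sends an extreme model-poisoning update.
honest = [[0.9, 1.1], [1.0, 1.0], [1.1, 0.9], [1.0, 1.05]]
poisoned = [[100.0, -100.0]]

aggregated = coordinate_wise_median(honest + poisoned)
# The median suppresses the outlier: aggregated is [1.0, 1.0]
```

With five clients, the attacker's extreme values land at the ends of each sorted coordinate and never become the median, whereas a plain mean would be pulled to roughly [20.8, -19.2].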