Trustworthy Distributed AI Systems: Robustness, Privacy, and Governance

2024 | WENQI WEI, Fordham University, New York City, NY, USA; LING LIU, Georgia Institute of Technology, Atlanta, GA, USA
The paper "Trustworthy Distributed AI Systems: Robustness, Privacy, and Governance" by Wenqi Wei and Ling Liu reviews techniques, algorithms, and theoretical foundations for ensuring trustworthy distributed AI systems. The authors highlight the growing economic and societal impact of emerging distributed AI systems, while also identifying security, privacy, and fairness issues that these systems face. They provide an overview of alternative architectures for distributed learning, discuss inherent vulnerabilities, and present a taxonomy of countermeasures for robustness, privacy protection, and fairness awareness. The paper covers robustness against evasion attacks, poisoning attacks, Byzantine attacks, and irregular data distribution, as well as privacy protection during model training and deployment. It also addresses AI fairness and governance, emphasizing the importance of data and model governance. The authors conclude with a discussion on open challenges and future research directions, including the need for trustworthy AI policy guidelines, co-design of responsibility-utility, and incentives and compliance. The paper aims to guide researchers and practitioners in building trustworthy distributed AI systems by providing a comprehensive review of the field and a structured roadmap for addressing vulnerabilities and countermeasures.The paper "Trustworthy Distributed AI Systems: Robustness, Privacy, and Governance" by Wenqi Wei and Ling Liu reviews techniques, algorithms, and theoretical foundations for ensuring trustworthy distributed AI systems. The authors highlight the growing economic and societal impact of emerging distributed AI systems, while also identifying security, privacy, and fairness issues that these systems face. They provide an overview of alternative architectures for distributed learning, discuss inherent vulnerabilities, and present a taxonomy of countermeasures for robustness, privacy protection, and fairness awareness. The paper covers robustness against evasion attacks, poisoning attacks, Byzantine attacks, and irregular data distribution, as well as privacy protection during model training and deployment. It also addresses AI fairness and governance, emphasizing the importance of data and model governance. The authors conclude with a discussion on open challenges and future research directions, including the need for trustworthy AI policy guidelines, co-design of responsibility-utility, and incentives and compliance. The paper aims to guide researchers and practitioners in building trustworthy distributed AI systems by providing a comprehensive review of the field and a structured roadmap for addressing vulnerabilities and countermeasures.
[slides and audio] Trustworthy Distributed AI Systems: Robustness, Privacy, and Governance