The paper "Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning" by Battista Biggio and Fabio Roli provides a comprehensive overview of the evolution of adversarial machine learning over the last decade. The authors trace the development from early work on the security of non-deep learning algorithms to more recent studies on deep learning algorithms in computer vision and cybersecurity tasks. They highlight the connections between seemingly different lines of research and common misconceptions related to the security evaluation of machine learning algorithms.
The paper emphasizes the importance of a proactive, security-by-design approach, advocating for a design cycle that explicitly accounts for the attacker's presence. It reviews the concept of arms races in computer security, distinguishing reactive from proactive strategies. The authors discuss three golden rules of proactive security: know your adversary, be proactive, and protect yourself.
The paper also delves into the modeling of threats against learning-based systems, characterizing the attacker's goal, knowledge, capability, and strategy. From this threat model it derives distinct attack scenarios, ranging from perfect-knowledge white-box attacks to limited- and zero-knowledge black-box ones, along with the corresponding optimization problems for crafting adversarial examples. The authors provide a detailed categorization of attacks against machine learning, such as evasion and poisoning attacks, and discuss how varying levels of attacker knowledge and attack strength affect the system's security.
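To make the optimization view concrete, a representative evasion formulation consistent with this threat model can be written as follows; the notation is simplified for illustration, with A(x', θ) denoting the attacker's objective on a perturbed sample x' against a model with parameters θ, d a distance in input space, and d_max the perturbation budget:

```latex
\mathbf{x}^\star \in \arg\max_{\mathbf{x}'} \; \mathcal{A}(\mathbf{x}', \boldsymbol{\theta})
\quad \text{s.t.} \quad d(\mathbf{x}, \mathbf{x}') \le d_{\max}
```

Poisoning attacks admit an analogous bilevel formulation, in which the attacker optimizes training points rather than test inputs, with the model's training problem appearing as the inner optimization.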
Additionally, the paper explores the simulation of test-time evasion and training-time poisoning attacks, and the design of defense mechanisms to mitigate these threats. It concludes by discussing the limitations of current work and future research challenges in the field of adversarial machine learning.
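As a flavor of how such test-time evasion attacks can be simulated, the minimal sketch below runs a gradient-based attack against a toy linear classifier under an L2 perturbation budget. It is a sketch under stated assumptions: the function name, model, and parameter values are illustrative, not the authors' reference implementation.

```python
import numpy as np

def evasion_attack(x, w, b, step=0.1, d_max=1.0, n_iter=50):
    """Gradient-based evasion against a linear classifier f(x) = w.x + b.

    Moves a sample x classified as malicious (f(x) > 0) toward the benign
    region (f(x) < 0) while keeping the perturbation within an L2 budget
    d_max. All names here are illustrative, not taken from the paper.
    """
    x_adv = x.copy()
    for _ in range(n_iter):
        # For a linear model, the gradient of f w.r.t. the input is just w;
        # step in its negative (normalized) direction to reduce f(x').
        x_adv = x_adv - step * w / (np.linalg.norm(w) + 1e-12)
        # Project back onto the feasible ball d(x, x') <= d_max.
        delta = x_adv - x
        norm = np.linalg.norm(delta)
        if norm > d_max:
            x_adv = x + delta * (d_max / norm)
        # Stop early once the sample evades detection.
        if x_adv @ w + b < 0:
            break
    return x_adv

# Toy usage: a 2-D linear classifier and one malicious point.
w = np.array([1.0, 1.0])
b = -0.5
x = np.array([2.0, 2.0])            # f(x) = 3.5 > 0: flagged as malicious
x_adv = evasion_attack(x, w, b, d_max=3.0)
print(x_adv, x_adv @ w + b)         # perturbed sample and its new score
```

The same loop structure generalizes to the nonlinear, gradient-based attacks the paper surveys by replacing the closed-form gradient with one computed via automatic differentiation.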