On Evaluating Adversarial Robustness

20 Feb 2019 | Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, Aleksander Mądry, Alexey Kurakin
This paper examines the challenges of assessing the robustness of machine learning systems against adversarial examples. It lays out the methodological foundations of evaluating defenses, reviews common pitfalls in evaluation practice, and proposes a checklist for avoiding them. The paper emphasizes the importance of defining a clear threat model, evaluating against adaptive adversaries who know how the defense works, and ensuring that evaluations are rigorous and reproducible. It stresses that defenses should be tested against a broad range of attacks, including transfer and black-box attacks mounted by adversaries without direct access to the model, and it advocates releasing pre-trained models and source code for transparency and reproducibility. It further argues that a defense must be evaluated under the specific threat model it claims robustness against, with careful analysis of attack success rates across a range of perturbation budgets. The paper concludes that evaluating adversarial robustness is a complex task requiring a thorough and methodical approach if the results are to be reliable and meaningful.
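To make the perturbation-budget discussion concrete, below is a minimal sketch of the kind of baseline evaluation the paper recommends: a PGD attack under an ℓ∞ budget, used to measure robust accuracy. It assumes a PyTorch image classifier with inputs in [0, 1]; the function names (`pgd_attack`, `robust_accuracy`) and the default `eps`, `alpha`, and `steps` values are illustrative choices, not prescriptions from the paper.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=20):
    """PGD under an l-infinity budget: keeps ||x_adv - x||_inf <= eps.

    Illustrative defaults; a real evaluation would sweep eps and tune
    alpha/steps until the attack converges.
    """
    # Random start inside the eps-ball, clipped to the valid pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascent step on the loss, then project back into the eps-ball
        # around the clean input and into the valid pixel range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)
    return x_adv.detach()

def robust_accuracy(model, loader, eps):
    """Accuracy on PGD adversarial examples. This is only an upper bound
    on true robustness: an attack that fails proves nothing."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x_adv = pgd_attack(model, x, y, eps=eps)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.size(0)
    return correct / total
```

A serious evaluation, as the paper stresses, would go well beyond this sketch: adapting the attack to the specific defense, using multiple random restarts, checking that the optimization has converged, and sweeping `eps` to report accuracy as a function of the perturbation budget.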