MITIGATING ADVERSARIAL EFFECTS THROUGH RANDOMIZATION

28 Feb 2018 | Cihang Xie, Zhishuai Zhang, Alan L. Yuille, Jianyu Wang, Zhou Ren
This paper addresses the vulnerability of Convolutional Neural Networks (CNNs) to adversarial examples: inputs carrying imperceptible perturbations that cause these networks to fail. The authors propose a defense mechanism that applies randomization at inference time, specifically random resizing and random padding, to mitigate the effects of adversarial attacks. The method is effective against both single-step and iterative attacks and requires no additional training or fine-tuning. Extensive experiments show that the proposed method significantly improves the robustness of CNNs to adversarial examples, achieving a normalized score of 0.924 in the NIPS 2017 adversarial examples defense challenge, compared with 0.773 for adversarial training alone. The code for the method is publicly available.
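
The defense amounts to a small preprocessing step placed in front of the classifier. The sketch below is a minimal PyTorch illustration of random resizing followed by random zero padding, written under assumptions: the function name randomization_layer, the 299-to-331 size range, and nearest-neighbor interpolation are illustrative choices, not a reproduction of the authors' released (TensorFlow) code.

```python
import torch
import torch.nn.functional as F

def randomization_layer(x, min_size=299, max_size=331):
    """Inference-time randomization: random resize, then random zero padding.

    A minimal sketch of the randomization idea; size range and interpolation
    mode are assumptions for illustration.
    x: batch of images with shape (N, C, H, W), expected H == W == min_size.
    Returns a tensor of shape (N, C, max_size, max_size).
    """
    # 1) Random resizing: pick a target side length in [min_size, max_size).
    rnd = torch.randint(min_size, max_size, (1,)).item()
    x = F.interpolate(x, size=(rnd, rnd), mode="nearest")

    # 2) Random padding: zero-pad up to max_size x max_size,
    #    placing the resized image at a random offset.
    pad_total = max_size - rnd
    pad_left = torch.randint(0, pad_total + 1, (1,)).item()
    pad_top = torch.randint(0, pad_total + 1, (1,)).item()
    x = F.pad(x, (pad_left, pad_total - pad_left,
                  pad_top, pad_total - pad_top), value=0.0)
    return x
```

At test time one would compute logits = model(randomization_layer(images)), assuming the classifier accepts the padded resolution. Because a new resize and padding offset are sampled on every forward pass, an attacker cannot tune a perturbation to one fixed input geometry, and the underlying network needs no retraining.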