Improving Transferability of Adversarial Examples with Input Diversity

1 Jun 2019 | Cihang Xie, Zhishuai Zhang, Yuyin Zhou, Song Bai, Jianyu Wang, Zhou Ren, Alan Yuille
The paper "Improving Transferability of Adversarial Examples with Input Diversity" addresses the issue of generating adversarial examples that are more effective across different networks, particularly in the challenging black-box setting where the attacker has no knowledge of the model structure and parameters. The authors propose a method called Diverse Inputs Iterative Fast Gradient Sign Method (DI²-FGSM) to enhance the transferability of adversarial examples by creating diverse input patterns. Unlike traditional methods that use only the original images, DI²-FGSM applies random transformations to the input images at each iteration, such as random resizing and padding, to generate more robust and transferable adversarial examples. The authors conduct extensive experiments on the ImageNet dataset, showing that their method significantly outperforms existing baselines in terms of success rates on both white-box and black-box models. They also evaluate their method against top defense solutions and official baselines from the NIPS 2017 adversarial competition, achieving an average success rate of 73.0%, which is a significant improvement over the top submission by 6.6%. The paper discusses the motivation behind the proposed method, which is inspired by data augmentation techniques used to prevent overfitting in training networks. It also explores the relationship between different attack methods, such as Fast Gradient Sign Method (FGSM), Iterative Fast Gradient Sign Method (I-FGSM), and Momentum Iterative Fast Gradient Sign Method (MI-FGSM). The authors further demonstrate the effectiveness of their method by attacking an ensemble of networks, which can generate stronger adversarial examples. Ablation studies are conducted to analyze the impact of various parameters, including the transformation probability, total iteration number, and step size. The results show that these parameters significantly influence the success rates of the proposed method. The paper concludes by discussing the hypothesis that diverse input patterns help generate adversarial examples that are more robust to small transformations, increasing their ability to fool other networks. Overall, the paper provides a strong benchmark for evaluating the robustness of networks to adversaries and the effectiveness of different defense methods in the future.The paper "Improving Transferability of Adversarial Examples with Input Diversity" addresses the issue of generating adversarial examples that are more effective across different networks, particularly in the challenging black-box setting where the attacker has no knowledge of the model structure and parameters. The authors propose a method called Diverse Inputs Iterative Fast Gradient Sign Method (DI²-FGSM) to enhance the transferability of adversarial examples by creating diverse input patterns. Unlike traditional methods that use only the original images, DI²-FGSM applies random transformations to the input images at each iteration, such as random resizing and padding, to generate more robust and transferable adversarial examples. The authors conduct extensive experiments on the ImageNet dataset, showing that their method significantly outperforms existing baselines in terms of success rates on both white-box and black-box models. They also evaluate their method against top defense solutions and official baselines from the NIPS 2017 adversarial competition, achieving an average success rate of 73.0%, which is a significant improvement over the top submission by 6.6%. 
The proposed method is motivated by data augmentation techniques used to prevent overfitting when training networks; analogously, diversifying the inputs keeps the attack from overfitting the white-box model it is generated on. The paper also examines the relationship between attack methods, showing that the Fast Gradient Sign Method (FGSM), the Iterative Fast Gradient Sign Method (I-FGSM), and the Momentum Iterative Fast Gradient Sign Method (MI-FGSM) arise from one another under specific parameter settings; the update rules are sketched below. The authors further strengthen the attack by targeting an ensemble of networks, which generates adversarial examples that transfer even better. Ablation studies analyze the impact of the transformation probability, the total number of iterations, and the step size, and show that these parameters substantially influence the success rate. The paper concludes with the hypothesis that diverse input patterns yield adversarial examples that are robust to small transformations and therefore better able to fool unseen networks. Overall, the work provides a strong benchmark for evaluating the robustness of networks to adversaries and the effectiveness of future defense methods.
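The update rules make these relationships explicit. The following is a transcription in standard notation (paraphrased, so treat exact symbols as approximate): DI²-FGSM differs from I-FGSM only in that the gradient is taken on the stochastically transformed input T(X; p), so it degenerates to I-FGSM when p = 0, and I-FGSM degenerates to FGSM when the iteration count is 1.

```latex
% FGSM: a single sign step of size \epsilon on the clean input X.
X^{adv} = X + \epsilon \cdot \operatorname{sign}\!\big(\nabla_X L(X, y^{\mathrm{true}}; \theta)\big)

% I-FGSM: iterate steps of size \alpha, clipping into the \epsilon-ball around X.
X^{adv}_{n+1} = \operatorname{Clip}^{\epsilon}_{X}\Big\{ X^{adv}_{n}
  + \alpha \cdot \operatorname{sign}\!\big(\nabla_X L(X^{adv}_{n}, y^{\mathrm{true}}; \theta)\big) \Big\}

% DI^2-FGSM: identical, except the gradient is evaluated on the stochastically
% transformed input T(X^{adv}_n; p); setting p = 0 recovers I-FGSM.
X^{adv}_{n+1} = \operatorname{Clip}^{\epsilon}_{X}\Big\{ X^{adv}_{n}
  + \alpha \cdot \operatorname{sign}\!\big(\nabla_X L(T(X^{adv}_{n}; p), y^{\mathrm{true}}; \theta)\big) \Big\}
```

Adding MI-FGSM's momentum accumulator on top of the diversity-transformed gradient gives the combined M-DI²-FGSM variant that the paper evaluates in the NIPS 2017 competition setting.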