Learning to Transform Dynamically for Better Adversarial Transferability


24 Jul 2024 | Rongyi Zhu, Zeliang Zhang, Susan Liang, Zhuo Liu, Chenliang Xu
Learning to Transform (L2T) is a novel approach that enhances adversarial transferability by dynamically selecting the optimal input transformations at each iteration. Adversarial examples, whose perturbations are imperceptible to humans, can deceive neural networks. Existing methods apply fixed transformations when generating adversarial examples, so their effectiveness is limited by the finite set of available transformations. L2T addresses this by selecting the optimal combination of transformations from a pool of candidates, improving adversarial transferability.

The selection of optimal transformation combinations is formulated as a trajectory optimization problem and solved with a reinforcement learning strategy. In each iteration of the adversarial attack, a subset of transformations is sampled and applied to the adversarial examples, with the sampling probabilities updated by gradient ascent to maximize the loss. By dynamically learning and applying the optimal transformation at each step, L2T reduces the search space and makes better use of transformations to increase input diversity, which also makes adversarial example generation more efficient than in other learning-based attack methods.

Comprehensive experiments on the ImageNet dataset, covering various models, defense mechanisms, and vision APIs including Google Vision and GPT-4V, show that L2T outperforms current baselines in attack success rate, demonstrating its practical significance. The code is available at https://github.com/RongyiZhu/L2T.
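The per-iteration selection loop described above can be illustrated with a toy sketch. This is not the paper's implementation: the candidate pool, the loss, and all names below are illustrative stand-ins, and the distribution over transformations is updated with the exact policy gradient of the expected loss (rather than a sampled estimate) so the toy run is deterministic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pool of candidate input transformations (stand-ins for
# operations such as scaling, flipping, etc.).
transforms = [
    lambda x: x,                    # identity
    lambda x: 0.5 * x,              # down-scaling (shrinks the toy loss)
    lambda x: np.flip(x, axis=-1),  # horizontal flip (loss-preserving here)
]

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def toy_loss(x):
    # Stand-in for the surrogate model's classification loss,
    # which the attack seeks to maximize.
    return float(np.mean(x ** 2))

x = rng.standard_normal((3, 8, 8))  # toy "image"
logits = np.zeros(len(transforms))  # selection logits over the pool
lr = 0.5

for _ in range(50):
    p = softmax(logits)
    losses = np.array([toy_loss(t(x)) for t in transforms])
    # Policy gradient of E[loss] w.r.t. the logits:
    #   d/d logits_j = p_j * (loss_j - sum_k p_k * loss_k)
    grad = p * (losses - p @ losses)
    logits += lr * grad             # gradient ascent: favour loss-raising ops

p = softmax(logits)
# The distribution shifts away from the loss-shrinking down-scaling op.
```

In the actual attack, each iteration would sample transformations from this learned distribution, apply them to the adversarial example, and take a perturbation step on the resulting loss; here only the distribution update is shown.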