Adversarial Patch

17 May 2018 | Tom B. Brown, Dandelion Mané, Aurko Roy, Martin Abadi, Justin Gilmer
The paper presents a method for creating universal, robust, and targeted adversarial image patches in the real world. These patches can be printed, placed in any scene, and photographed to mislead image classifiers into outputting a chosen target class, even when the patch occupies only a small portion of the image. The patches are universal because they work across different scenes, robust because they withstand a wide range of transformations, and targeted because they can force the classifier to output any desired class. The authors demonstrate the attack on multiple image classification models, including VGG16, and show that the patches can be camouflaged to reduce their saliency to human observers. They also test transfer to the physical world, showing that printed patches successfully fool classifiers in real-world scenes. The paper concludes by arguing that defenses must address large, localized perturbations in addition to small $L_p$ perturbations, since attackers may opt for perturbations that are noticeable but more effective.
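To make the attack concrete, the sketch below shows one way such a patch could be trained in a white-box PyTorch loop: the patch is pasted into training images at random positions, scales, and rotations, and optimized to maximize the classifier's probability of a chosen target class (an expectation-over-transformations objective). This is a minimal illustration, not the authors' exact implementation; the model choice (VGG16), target class index, patch size, and transformation ranges are assumptions made for the example.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms.functional as TF

# Pretrained classifier to attack (white-box setting); any ImageNet model works.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()
for p in model.parameters():
    p.requires_grad_(False)

IMAGENET_MEAN = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
IMAGENET_STD = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

patch_size = 64        # assumed patch resolution (illustrative)
target_class = 859     # assumed ImageNet target class index (illustrative)
patch = torch.rand(1, 3, patch_size, patch_size, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.05)

def apply_patch(images, patch):
    """Paste the patch into each image at a random location, scale, and rotation."""
    b, _, h, w = images.shape
    scale = float(torch.empty(1).uniform_(0.5, 1.0))
    angle = float(torch.empty(1).uniform_(-20.0, 20.0))
    size = max(8, int(patch_size * scale))
    p = TF.resize(patch, [size, size], antialias=True)
    p = TF.rotate(p, angle)
    out = images.clone()
    for i in range(b):
        y = int(torch.randint(0, h - size + 1, (1,)))
        x = int(torch.randint(0, w - size + 1, (1,)))
        out[i, :, y:y + size, x:x + size] = p[0]
    return out

def training_step(images):
    """One step: push the classifier toward the target class for patched images,
    averaged over random patch placements and transformations."""
    patched = apply_patch(images, patch.clamp(0, 1))
    logits = model((patched - IMAGENET_MEAN) / IMAGENET_STD)
    targets = torch.full((images.shape[0],), target_class, dtype=torch.long)
    loss = F.cross_entropy(logits, targets)  # minimizing this maximizes target prob.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        patch.clamp_(0, 1)  # keep the patch a valid image
    return loss.item()

# Usage with random stand-in images; in practice, sample real scene images.
for step in range(10):
    batch = torch.rand(4, 3, 224, 224)
    print(step, training_step(batch))
```

Because the loss is averaged over random placements and transformations, the optimized patch tends to work regardless of where it lands in a scene, which is what makes it universal and robust rather than tied to one image.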