Adversarial Examples for Semantic Segmentation and Object Detection

21 Jul 2017 | Cihang Xie1*, Jianyu Wang2*, Zhishuai Zhang1*, Yuyin Zhou1, Lingxi Xie1, Alan Yuille1
This paper presents a novel algorithm called Dense Adversary Generation (DAG) for generating adversarial examples in semantic segmentation and object detection. Unlike adversarial examples in image classification, which only need to fool a single prediction per image, DAG must fool a dense set of targets (e.g., pixels or object proposals) simultaneously, which makes the attack more challenging. The algorithm optimizes a loss function over the whole set of targets to produce adversarial perturbations that cause deep networks to misclassify all of them. DAG is effective across a variety of deep networks, and the resulting perturbations transfer across different architectures and tasks. The perturbations are visually imperceptible, with minimal intensity changes, and can be combined to enhance transferability and effectiveness in black-box attacks. Experiments show that DAG significantly reduces the accuracy of both segmentation and detection networks. The results highlight the vulnerability of deep networks to adversarial examples and suggest that network architecture plays a crucial role in how well these attacks succeed and transfer.
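To make the iterative procedure concrete, the sketch below illustrates one way a DAG-style update could look in PyTorch. It is a minimal illustration under stated assumptions, not the authors' implementation: the `model` interface (returning one row of class logits per target), the `targets`, `orig_labels`, and `adv_labels` tensors, and the hyperparameters `gamma` and `max_iters` are all assumed for the example. At each iteration only targets that the network still classifies correctly contribute to the gradient, and each step is normalized by its L-infinity norm so the accumulated perturbation stays small.

```python
import torch

def dense_adversary_generation(model, image, targets, orig_labels, adv_labels,
                               gamma=0.5, max_iters=200):
    """Minimal DAG-style sketch; all interfaces here are illustrative assumptions."""
    r = torch.zeros_like(image)          # accumulated adversarial perturbation

    for _ in range(max_iters):
        x_adv = (image + r).detach().requires_grad_(True)
        logits = model(x_adv)            # assumed shape: [num_targets, num_classes]

        # Keep only targets that the network still classifies correctly.
        preds = logits[targets].argmax(dim=1)
        active = preds == orig_labels
        if not active.any():
            break                        # every target has been fooled

        # Increase the adversarial-class score and decrease the correct-class
        # score, summed over the remaining active targets.
        act = targets[active]
        objective = (logits[act, adv_labels[active]]
                     - logits[act, orig_labels[active]]).sum()
        grad, = torch.autograd.grad(objective, x_adv)

        # Normalize each step by its L-infinity norm so the update stays small.
        r = r + gamma * grad / (grad.abs().max() + 1e-12)

    return r.detach()
```

Restricting each step to the still-correct targets is what distinguishes this dense attack from simply summing independent per-target perturbations: once a target is fooled it no longer drives the update, which helps keep the total distortion imperceptible.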