Adversarial Examples for Semantic Segmentation and Object Detection

21 Jul 2017 | Cihang Xie1*, Jianyu Wang2*, Zhishuai Zhang1*, Yuyin Zhou1, Lingxi Xie1, Alan Yuille1
This paper extends the concept of adversarial examples, visually imperceptible perturbations that cause deep networks to fail at image classification, to the more challenging tasks of semantic segmentation and object detection. The authors propose Dense Adversary Generation (DAG), an algorithm that generates adversarial examples for these tasks by optimizing a loss function over a dense set of targets (pixels for segmentation, proposals for detection), producing perturbations that confuse both segmentation and detection networks.

The paper demonstrates that these perturbations transfer across networks trained on different data, with different architectures, and even for different recognition tasks; transfer is strongest between networks that share the same architecture. Combining heterogeneous perturbations often improves adversarial performance further, providing an effective method for black-box attacks. The authors also study the effect of proposal density and the convergence behavior of DAG, showing that denser proposal sampling and a sufficient number of iterations are crucial for effective adversarial generation. The paper concludes by discussing the implications of this transferability and directions for future research.
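The following is a minimal sketch of how the DAG iteration might look for a semantic-segmentation target, assuming a PyTorch-style model that maps an image to per-pixel class logits. The function signature, the `gamma` step size, and the stopping criterion are illustrative assumptions, not the authors' reference implementation; the core idea it reproduces is iterating over the still-correctly-classified targets, suppressing each target's true-class score while promoting an adversarial class, and accumulating normalized gradient steps.

```python
import torch

def dag_attack(model, image, true_labels, adv_labels, max_iters=200, gamma=0.5):
    """Sketch of Dense Adversary Generation (DAG) for semantic segmentation.

    model       : callable mapping an image tensor (1, C, H, W) to per-pixel
                  logits of shape (1, num_classes, H, W)  -- assumed interface
    image       : input tensor, shape (1, C, H, W)
    true_labels : ground-truth label map, shape (H, W), dtype long
    adv_labels  : adversarial target label map, shape (H, W), dtype long
    """
    perturbation = torch.zeros_like(image)

    for _ in range(max_iters):
        x = (image + perturbation).detach().requires_grad_(True)
        logits = model(x)                               # (1, K, H, W)
        pred = logits.argmax(dim=1)[0]                  # (H, W)

        # "Active" targets: pixels the network still classifies correctly.
        active = pred == true_labels
        if not active.any():
            break                                       # every target is fooled

        # Loss over the dense target set: true-class score minus
        # adversarial-class score, summed over active pixels only.
        flat = logits[0].permute(1, 2, 0)[active]       # (n_active, K)
        loss = (flat.gather(1, true_labels[active].unsqueeze(1)).sum()
                - flat.gather(1, adv_labels[active].unsqueeze(1)).sum())
        loss.backward()

        # Take a normalized gradient-descent step on the input and
        # accumulate it into the perturbation.
        grad = x.grad[0]
        step = gamma * grad / grad.abs().max().clamp_min(1e-12)
        perturbation = perturbation - step.unsqueeze(0)

    return perturbation
```

For object detection, the same loop would run over region proposals rather than pixels, with the "active" set being proposals that are still assigned their correct class; the dense proposal sampling discussed in the paper matters because proposals that are not attacked directly can otherwise survive the perturbation.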