Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey

AUGUST 2017 | Naveed Akhtar and Ajmal Mian
This survey presents a comprehensive overview of adversarial attacks on deep learning in computer vision. Deep learning has become the backbone of many computer vision applications, yet it is vulnerable to adversarial attacks: subtle perturbations of the input that cause models to make incorrect predictions. These perturbations are often imperceptible to humans but can significantly degrade model performance. The survey reviews the design and analysis of adversarial attacks, defenses against them, and their real-world implications.

The survey begins with definitions of key terms, including adversarial examples, perturbations, and attack settings (e.g., black-box and white-box). It then reviews attack methods that generate adversarial examples to fool classifiers, such as the Fast Gradient Sign Method (FGSM), the Iterative Least-likely Class Method (ILCM), and the Jacobian-based Saliency Map Attack (JSMA). It also discusses universal adversarial perturbations, which can fool multiple models across many different images.
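Of the methods listed above, FGSM is the simplest to state: it perturbs the input in the direction of the sign of the loss gradient. The sketch below illustrates the idea; the PyTorch framing, the choice of epsilon, and the assumption that pixels lie in [0, 1] are illustrative choices, not details taken from the survey itself.

```python
# Minimal FGSM sketch, assuming a PyTorch image classifier `model`
# and a correctly labelled input batch with pixel values in [0, 1].
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=8 / 255):
    """Return adversarial images x' = x + epsilon * sign(grad_x J(theta, x, y))."""
    images = images.clone().detach().requires_grad_(True)
    logits = model(images)
    loss = F.cross_entropy(logits, labels)

    model.zero_grad()
    loss.backward()

    # Step each pixel by epsilon in the direction that increases the loss.
    adv_images = images + epsilon * images.grad.sign()
    # Keep the result a valid image (assumed range [0, 1] here).
    return adv_images.clamp(0.0, 1.0).detach()
```

Iterative variants such as ILCM apply many small steps of this form while steering the prediction toward a chosen (least-likely) class, rather than a single step away from the true label.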
Beyond classification, the survey explores attacks on autoencoders, generative models, recurrent neural networks, and deep reinforcement learning. It also covers attacks demonstrated in the real world, such as attacks on face attributes and attacks carried out through cell-phone cameras, showing that adversarial examples can be physically printed and still cause objects to be misclassified in real-world scenarios. The survey concludes with a broader outlook on research directions, emphasizing the importance of understanding and defending against adversarial attacks in practical applications. The findings highlight the need for robust defenses and further research to mitigate the risks that adversarial attacks pose to deep learning systems.