Universal adversarial perturbations

9 Mar 2017 | Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, Pascal Frossard
The paper demonstrates the existence of universal, image-agnostic perturbations that cause state-of-the-art deep neural networks to misclassify natural images with high probability, while remaining quasi-imperceptible to the human eye. The authors propose an iterative algorithm to compute these perturbations and demonstrate its effectiveness against several state-of-the-art architectures. They show that the perturbations generalize well across different neural networks and across data points, making them *doubly universal*. The study also reveals geometric correlations among the decision boundaries of classifiers, which these perturbations exploit. The paper further analyzes the vulnerability of deep neural networks to such perturbations and discusses potential security implications. The findings highlight the need for more robust classifiers and provide insights into the geometric structure of deep neural networks.
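The idea behind the algorithm is to iterate over a set of training images, and whenever the current universal perturbation fails to fool the classifier on an image, add the smallest extra perturbation that pushes that image across the decision boundary, then project the running sum back onto an l_p ball of radius xi. Below is a minimal Python sketch of this procedure, under stated assumptions: `classifier` is a hypothetical callable returning the predicted label, and `minimal_perturbation` stands in for a DeepFool-style subroutine that computes the smallest perturbation sending one image across the boundary; `1 - delta` is the target fooling rate, as in the paper.

```python
import numpy as np

def project_lp_ball(v, xi, p=np.inf):
    """Project a perturbation v onto the l_p ball of radius xi (p = 2 or inf)."""
    if p == np.inf:
        return np.clip(v, -xi, xi)
    norm = np.linalg.norm(v.ravel())
    return v * min(1.0, xi / norm)

def universal_perturbation(images, classifier, minimal_perturbation,
                           xi=10.0, p=np.inf, delta=0.2, max_iters=10):
    """Sketch of the iterative procedure: accumulate minimal per-image
    perturbations and project onto the l_p ball until the fooling rate
    on the sample reaches 1 - delta. `classifier` and
    `minimal_perturbation` are assumed (hypothetical) callables."""
    v = np.zeros_like(images[0])
    for _ in range(max_iters):
        np.random.shuffle(images)
        for x in images:
            # Only update v on images that the current v does not already fool.
            if classifier(x + v) == classifier(x):
                dv = minimal_perturbation(x + v, classifier)  # e.g. a DeepFool step
                v = project_lp_ball(v + dv, xi, p)
        # Fraction of sample images whose prediction is changed by v.
        fooled = np.mean([classifier(x + v) != classifier(x) for x in images])
        if fooled >= 1.0 - delta:
            break
    return v
```

A notable design point is that the perturbation is fit on a relatively small sample of images yet, per the paper's experiments, transfers to unseen images and even to other network architectures.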