This paper demonstrates the existence of universal adversarial perturbations that fool state-of-the-art deep neural networks on natural images. These perturbations are small, quasi-imperceptible, and image-agnostic: a single perturbation vector misleads the classifier on most natural images. The authors propose an algorithm that computes such a perturbation by iterating over a set of training images and, for each image not yet fooled, accumulating a minimal additional perturbation that pushes it across the decision boundary, while keeping the aggregate perturbation within a prescribed norm bound. The algorithm works across different neural networks and is robust to variations in the input data. The resulting perturbations fool a large majority of validation images, above 90% for some networks, on architectures including CaffeNet, VGG-F, and GoogLeNet. They also transfer across architectures, making them doubly universal: they generalize both across images and, to a large extent, across network structures.

The paper further explains this vulnerability by analyzing geometric correlations between different parts of the decision boundary. The results suggest that the decision boundaries of deep networks exhibit significant redundancies and correlations across data points, which universal perturbations exploit. The authors conclude that these findings have important implications for the security of deep neural networks, as they reveal vulnerabilities that adversaries can exploit, and they show that fine-tuning the networks on perturbed images does not significantly improve robustness against universal perturbations. The paper highlights the importance of understanding the geometric properties of decision boundaries in deep neural networks in order to improve their robustness against adversarial attacks.
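The iterative procedure summarized above lends itself to a compact sketch. The following NumPy-style code is a minimal illustration under stated assumptions, not the authors' released implementation: the callables `classifier` and `per_image_attack` (e.g. a DeepFool-style minimal-perturbation routine), the helper `project_lp`, and the default hyperparameters are placeholders chosen for readability.

```python
import numpy as np

def project_lp(v, xi, p):
    """Project v onto the l_p ball of radius xi (l_2 and l_inf cases shown)."""
    if p == 2:
        norm = np.linalg.norm(v.ravel())
        if norm > xi:
            v = v * (xi / norm)
    elif p == np.inf:
        v = np.clip(v, -xi, xi)
    return v

def compute_universal_perturbation(images, classifier, per_image_attack,
                                   xi=10.0, p=np.inf,
                                   target_fooling_rate=0.8, max_epochs=10):
    """Sketch of an iterative universal-perturbation procedure.

    images: array of shape (N, H, W, C) sampled from the data distribution.
    classifier: callable mapping a batch of images to predicted labels (assumed).
    per_image_attack: callable returning a minimal perturbation that sends a
        single image across the nearest decision boundary (assumed, e.g. DeepFool).
    xi: radius of the l_p ball constraining the universal perturbation.
    """
    v = np.zeros_like(images[0])          # universal perturbation, built up iteratively
    clean_labels = classifier(images)     # labels on unperturbed images

    for _ in range(max_epochs):
        for i, x in enumerate(images):
            # If v does not yet fool this image, find a minimal extra
            # perturbation that pushes x + v across the decision boundary.
            if classifier((x + v)[None])[0] == clean_labels[i]:
                delta = per_image_attack(x + v)
                # Aggregate and project back onto the l_p ball of radius xi
                # so the perturbation stays quasi-imperceptible.
                v = project_lp(v + delta, xi, p)

        # Stop once the desired fraction of images is misclassified.
        fooled = classifier(images + v) != clean_labels
        if fooled.mean() >= target_fooling_rate:
            break
    return v
```

The projection step is what keeps the accumulated vector within the norm budget xi, which is why the final perturbation remains small even though it is built from many per-image updates.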