Physical 3D Adversarial Attacks against Monocular Depth Estimation in Autonomous Driving

27 Mar 2024 | Junhao Zheng, Chenhao Lin, Jiahao Sun, Zhengyu Zhao, Qian Li, Chao Shen
This paper proposes 3D Depth Fool (3D²Fool), the first 3D texture-based adversarial attack against monocular depth estimation (MDE) models used in autonomous driving. Unlike previous 2D adversarial patch-based attacks, 3D²Fool generates 3D adversarial textures that are robust to various viewpoints and weather conditions such as rain and fog. The attack is designed to be effective across different vehicle types and scenarios, and it can cause MDE errors of over 10 meters in real-world experiments.

The method involves two main modules: texture conversion (TC) and physical augmentation (PA). TC converts 2D adversarial textures into 3D camouflage textures that can be applied to various target objects, while PA simulates weather conditions to improve the attack's robustness.

The attack is evaluated on multiple MDE models and shows superior performance in terms of depth estimation error and affected area. The method is also tested in real-world scenarios, where it successfully deceives MDE models even under varying lighting and weather conditions. The results demonstrate that 3D²Fool is more effective and robust than existing 2D adversarial patch-based attacks. The code for 3D²Fool is available at https://github.com/Gandolfczjh/3D2Fool.
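The summary does not give the exact implementation of the TC module, but the core idea of mapping a small 2D adversarial texture seed onto a 3D object's surface can be sketched as UV-based texture sampling. The function name, tiling behavior, and nearest-neighbor lookup below are illustrative assumptions, not the paper's code:

```python
import numpy as np

def apply_texture(uv, texture):
    """Illustrative sketch of 2D-to-3D texture conversion: sample a 2D
    adversarial texture at per-vertex UV coordinates, tiling it so one
    small texture seed can cover an arbitrary 3D surface.
    uv: Nx2 array of UV coordinates; texture: HxWx3 image array."""
    h, w = texture.shape[:2]
    # Wrap UVs into [0, 1) so the texture seed repeats across the mesh,
    # then convert to integer pixel indices (nearest-neighbor lookup).
    u = (np.mod(uv[:, 0], 1.0) * w).astype(int)
    v = (np.mod(uv[:, 1], 1.0) * h).astype(int)
    return texture[v, u]

# Example: a 4x4 texture whose pixel (i, j) stores [i, j, 0].
texture = np.zeros((4, 4, 3))
for i in range(4):
    for j in range(4):
        texture[i, j] = [i, j, 0]
uv = np.array([[0.0, 0.0], [0.5, 0.25]])
colors = apply_texture(uv, texture)
```

Because the lookup wraps the UV coordinates, the same texture seed repeats seamlessly over vehicles of different shapes and sizes, which is what makes a single optimized texture reusable across target objects.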
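The PA module's weather simulation is likewise not detailed here; fog augmentation is commonly implemented with the standard atmospheric scattering model, I = J·t + A·(1 − t) with transmission t = exp(−β·d). Treat the sketch below (including the `beta` and `airlight` parameters) as an assumption about how such augmentation could work, not the paper's implementation:

```python
import numpy as np

def add_fog(image, depth, beta=0.05, airlight=0.8):
    """Illustrative fog augmentation via the atmospheric scattering model:
    I = J * t + A * (1 - t), with transmission t = exp(-beta * depth).
    image: HxWx3 floats in [0, 1]; depth: HxW depth map in meters."""
    t = np.exp(-beta * depth)[..., None]  # per-pixel transmission
    return image * t + airlight * (1.0 - t)

# Example: a distant pixel (100 m) fades toward the airlight value,
# while a near pixel (1 m) stays close to its original color.
img = np.full((2, 2, 3), 0.2)
depth = np.array([[1.0, 100.0], [1.0, 100.0]])
foggy = add_fog(img, depth)
```

Training the adversarial texture on such augmented renderings is what lets the attack remain effective when the camera input is degraded by rain or fog at test time.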