Explainable AI for Safe and Trustworthy Autonomous Driving: A Systematic Review

3 Jul 2024 | Anton Kuznietsov, Balint Gyevnar, Cheng Wang, Steven Peters, Stefano V. Albrecht
The paper "Explainable AI for Safe and Trustworthy Autonomous Driving: A Systematic Review" by Anton Kuznietsov, Balint Gyevar, Cheng Wang, Steven Peters, and Stefano V. Albrecht provides a comprehensive review of explainable AI (XAI) techniques for enhancing the safety and trustworthiness of autonomous driving (AD). The authors highlight the importance of XAI in addressing the challenges posed by complex AI systems in AD, particularly in ensuring transparency and interpretability. They begin by analyzing the requirements for AI in AD, focusing on data, model, and agency, and argue that XAI is fundamental to meeting these requirements. The paper then describes the sources of explanations in AI and presents a taxonomy of XAI, identifying five key contributions of XAI for safe and trustworthy AD: interpretable design, interpretable surrogate models, interpretable monitoring, auxiliary explanations, and interpretable validation. Finally, the authors propose a conceptual modular framework called SafeX to integrate these XAI methods, enabling explanation delivery to users while ensuring the safety of AI models. The review covers both modular and end-to-end (E2E) pipelines, focusing on perception, planning and prediction, and control, and discusses the limitations of existing modular XAI frameworks for AD. The paper aims to provide a systematic and repeatable review methodology, contributing to the growing body of literature on XAI for AD.The paper "Explainable AI for Safe and Trustworthy Autonomous Driving: A Systematic Review" by Anton Kuznietsov, Balint Gyevar, Cheng Wang, Steven Peters, and Stefano V. Albrecht provides a comprehensive review of explainable AI (XAI) techniques for enhancing the safety and trustworthiness of autonomous driving (AD). The authors highlight the importance of XAI in addressing the challenges posed by complex AI systems in AD, particularly in ensuring transparency and interpretability. They begin by analyzing the requirements for AI in AD, focusing on data, model, and agency, and argue that XAI is fundamental to meeting these requirements. The paper then describes the sources of explanations in AI and presents a taxonomy of XAI, identifying five key contributions of XAI for safe and trustworthy AD: interpretable design, interpretable surrogate models, interpretable monitoring, auxiliary explanations, and interpretable validation. Finally, the authors propose a conceptual modular framework called SafeX to integrate these XAI methods, enabling explanation delivery to users while ensuring the safety of AI models. The review covers both modular and end-to-end (E2E) pipelines, focusing on perception, planning and prediction, and control, and discusses the limitations of existing modular XAI frameworks for AD. The paper aims to provide a systematic and repeatable review methodology, contributing to the growing body of literature on XAI for AD.