Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI

December 30, 2019 | Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera
The article "Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI" by Alejandro Barredo Arrieta et al. provides a comprehensive overview of the field of XAI, emphasizing its importance for the practical deployment of AI models. The authors define explainability in Machine Learning (ML) and propose a novel definition that considers the audience for which the explainability is sought. They also discuss different levels of transparency in ML models and post-hoc explainability techniques. The article reviews existing literature on XAI, including contributions to transparent models and post-hoc explainability, and presents two taxonomies: one for ML models and another specifically for Deep Learning models. The authors identify challenges in XAI, such as the need for evaluation metrics and the impact of XAI on data privacy and confidentiality. They conclude by discussing the concept of Responsible Artificial Intelligence, which emphasizes fairness, accountability, and privacy alongside explainability. The article aims to provide a thorough taxonomy and reference material for researchers and professionals to advance the field of XAI and ensure ethical AI deployment.The article "Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI" by Alejandro Barredo Arrieta et al. provides a comprehensive overview of the field of XAI, emphasizing its importance for the practical deployment of AI models. The authors define explainability in Machine Learning (ML) and propose a novel definition that considers the audience for which the explainability is sought. They also discuss different levels of transparency in ML models and post-hoc explainability techniques. The article reviews existing literature on XAI, including contributions to transparent models and post-hoc explainability, and presents two taxonomies: one for ML models and another specifically for Deep Learning models. The authors identify challenges in XAI, such as the need for evaluation metrics and the impact of XAI on data privacy and confidentiality. They conclude by discussing the concept of Responsible Artificial Intelligence, which emphasizes fairness, accountability, and privacy alongside explainability. The article aims to provide a thorough taxonomy and reference material for researchers and professionals to advance the field of XAI and ensure ethical AI deployment.