Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI

December 30, 2019 | Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera
Explainable Artificial Intelligence (XAI) is crucial for the practical deployment of AI models, especially in critical applications where decisions must be understood and trusted. This paper reviews the existing literature on XAI, proposes a novel definition of explainability that emphasizes the target audience, and presents a taxonomy of XAI methods for different machine learning models, including deep learning. It also identifies challenges and opportunities in XAI, such as the intersection of data fusion and explainability, and highlights the importance of Responsible AI, which integrates fairness, accountability, and transparency. The paper discusses the distinction between interpretability and explainability, outlines the levels of transparency that machine learning models can exhibit, and presents post-hoc explainability techniques, such as text explanations, visualizations, and local explanations, along with an assessment of their effectiveness. It concludes that XAI is essential for ensuring the trustworthiness, fairness, and accountability of AI systems, and that Responsible AI is a key goal for the future of AI development.
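To make the idea of a local post-hoc explanation concrete, here is a minimal sketch of a LIME-style local surrogate: a black-box classifier is explained at a single instance by sampling perturbations around it, weighting them by proximity, and fitting a weighted linear model whose coefficients serve as per-feature importance scores. The dataset, model choice, the explain_locally helper, and its parameters (n_samples, kernel_width, the Gaussian perturbation scheme) are illustrative assumptions for this sketch, not a method prescribed by the paper.

```python
# Minimal sketch of a local post-hoc explanation (LIME-style weighted linear
# surrogate). Names and parameters are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Train an opaque "black-box" model on a standard dataset.
data = load_breast_cancer()
X, y = data.data, data.target
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def explain_locally(instance, model, X_ref, n_samples=2000, kernel_width=1.0):
    """Fit a proximity-weighted linear surrogate around one instance and
    return its coefficients as per-feature importance scores."""
    rng = np.random.default_rng(0)
    scale = X_ref.std(axis=0)
    # Sample perturbations in the neighbourhood of the explained instance.
    perturbations = instance + rng.normal(0.0, scale, size=(n_samples, X_ref.shape[1]))
    preds = model.predict_proba(perturbations)[:, 1]
    # Weight each perturbation by its (scaled) distance to the instance.
    distances = np.linalg.norm((perturbations - instance) / scale, axis=1)
    weights = np.exp(-(distances ** 2) / (kernel_width ** 2))
    surrogate = Ridge(alpha=1.0).fit(perturbations, preds, sample_weight=weights)
    return surrogate.coef_

# Explain the model's prediction for the first instance.
importances = explain_locally(X[0], black_box, X)
top = np.argsort(np.abs(importances))[::-1][:5]
for i in top:
    print(f"{data.feature_names[i]}: {importances[i]:+.4f}")
```

The surrogate is only valid in the neighbourhood of the explained instance; this locality is what distinguishes such techniques from the global transparency levels the paper discusses for inherently interpretable models.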