2024 | ADRIEN BENNETOT, Sorbonne Université, Paris, France; IVAN DONADELLO, Free University of Bozen-Bolzano, Bolzano, Italy; AYOUB EL QADI EL HAOUARI, Sorbonne Université, Paris, France and Tinubu Square, Paris, France; MAURO DRAGONI, Fondazione Bruno Kessler, Trento, Italy; THOMAS FROSSARD, Tinubu Square, Paris, France; BENEDIKT WAGNER, City University of London, London, United Kingdom of Great Britain and Northern Ireland; ANNA SARANTI, University of Natural Resources and Life Sciences Vienna, Wien, Austria; SILVIA TULLI, Sorbonne Université, Paris, France; MARIA TROCAN, Institut Supérieur d'Électronique de Paris (ISEP), Paris, France; RAJA CHATILA, Sorbonne Université, Paris, France; ANDREAS HOLZINGER, University of Natural Resources and Life Sciences Vienna, Wien, Austria and Medical University Graz Centre-Independent Institutes, Graz, Austria; ARTUR D'AVILA GARCEZ, City University of London, London, United Kingdom of Great Britain and Northern Ireland; NATALIA DÍAZ-RODRÍGUEZ, University of Granada, Granada, Spain
This article provides a practical guide to Explainable Artificial Intelligence (XAI) techniques, aimed at computer science professionals. It addresses the growing need for transparency and explainability in machine learning models, particularly in critical applications where decisions can have significant consequences. The guide covers various XAI methods for different types of data, including tabular, image, and text data, as well as neural-symbolic computation. Each method is accompanied by Python notebooks and examples to facilitate practical application. Key techniques discussed include SHapley Additive exPlanations (SHAP), Diverse Counterfactual Explanations (DiCE), Gradient-weighted Class Activation Mapping (Grad-CAM), and Integrated Gradients (IG). The article also explores user interface aspects, such as natural language generation and counterfactual explanations, to enhance interaction between systems and non-technical users. The goal is to provide a comprehensive resource for developers and researchers to understand and apply XAI techniques effectively.