2024 | ADRIEN BENNETOT, IVAN DONADELLO, AYOUB EL QADI EL HAOUARI, MAURO DRAGONI, THOMAS FROSSARD, BENEDIKT WAGNER, ANNA SARRANTI, SILVIA TULLI, MARIA TROCAN, RAJA CHATILA, ANDREAS HOLZINGER, ARTUR D'AVILA GARCEZ, NATALIA DÍAZ-RODRÍGUEZ
This tutorial provides a practical guide to Explainable Artificial Intelligence (XAI) techniques for various data types, including tabular, image, and text data. It aims to address the need for transparency and explainability in machine learning models, particularly in critical domains like healthcare and finance. The guide introduces several XAI methods, such as SHAP (SHapley Additive exPlanations) and DiCE (Diverse Counterfactual Explanations), which help explain model decisions and provide actionable insights. SHAP is a model-agnostic method that uses game theory to explain the contribution of each feature to the model's prediction, while DiCE generates counterfactual explanations that highlight the minimal changes needed to alter the model's output. The tutorial also covers Grad-CAM for image models, which visualizes the regions of an image that are most relevant for the model's prediction. For language models, the guide explains the attention mechanism in transformers and introduces tools like Transformer-Interpret to enhance model interpretability. The tutorial includes Python notebooks and examples for each method, allowing users to apply the techniques to their specific use cases. It emphasizes the importance of explainability in ensuring trust and accountability in AI systems, particularly in high-stakes applications. The guide is intended for developers and practitioners with a computer science background who wish to understand and implement XAI techniques to improve the transparency and reliability of their models.
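
To make the methods above concrete, a few minimal sketches follow. They illustrate each library's typical usage rather than reproducing the tutorial's own notebooks, and all dataset and model choices are placeholders. First, SHAP on a tabular model, using shap's bundled adult-census demo data and an XGBoost classifier (both assumptions chosen for brevity):

```python
# Minimal SHAP sketch for a tabular model. Dataset and model are
# illustrative; any fitted estimator works with shap.Explainer.
import shap
import xgboost
from sklearn.model_selection import train_test_split

# Demo dataset bundled with shap (adult census income).
X, y = shap.datasets.adult()
X_train, X_test, y_train, y_test = train_test_split(
    X, y.astype(int), random_state=0)

model = xgboost.XGBClassifier().fit(X_train, y_train)

# Shapley values: one additive per-feature contribution per prediction.
explainer = shap.Explainer(model)
shap_values = explainer(X_test)

shap.plots.beeswarm(shap_values)      # global feature importance
shap.plots.waterfall(shap_values[0])  # single-prediction breakdown
```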
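
DiCE follows a wrap-then-query flow: wrap the data and the trained model, then request a handful of diverse counterfactuals, i.e. minimal feature changes that flip the prediction. The breast-cancer dataset and random-forest model below are stand-ins, not the tutorial's example:

```python
# Minimal DiCE sketch; dataset and model choices are illustrative.
import dice_ml
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

raw = load_breast_cancer(as_frame=True)
df = raw.frame.rename(columns={"target": "outcome"})
features = list(raw.feature_names)

model = RandomForestClassifier(random_state=0).fit(df[features], df["outcome"])

# Wrap data and model, then ask for 3 counterfactuals that flip the class.
data = dice_ml.Data(dataframe=df, continuous_features=features,
                    outcome_name="outcome")
wrapped = dice_ml.Model(model=model, backend="sklearn")
explainer = dice_ml.Dice(data, wrapped, method="random")

cfs = explainer.generate_counterfactuals(
    df[features].iloc[[0]], total_CFs=3, desired_class="opposite")
cfs.visualize_as_dataframe(show_only_changes=True)
```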
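
Grad-CAM itself fits in a few lines with forward and backward hooks: average the gradient of the class score over each feature map to get per-channel weights, form the weighted sum of the activations, and pass it through a ReLU. The ResNet-18, the choice of layer4, and the random stand-in image below are assumptions:

```python
# Hand-rolled Grad-CAM sketch for a torchvision ResNet-18.
# Layer choice and input preprocessing are illustrative.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()

# Capture activations and gradients of the last conv block via hooks.
feats, grads = {}, {}
layer = model.layer4
layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

x = torch.randn(1, 3, 224, 224)   # stand-in for a preprocessed image
score = model(x)[0].max()         # score of the top predicted class
score.backward()

# Grad-CAM: gradient-weighted channel sum, ReLU, upsample, normalize.
w = grads["g"].mean(dim=(2, 3), keepdim=True)        # (1, C, 1, 1)
cam = F.relu((w * feats["a"]).sum(dim=1))            # (1, 7, 7)
cam = F.interpolate(cam[None], size=(224, 224), mode="bilinear")[0, 0]
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```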
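
For transformers, the attention weights the abstract refers to can be read directly from Hugging Face models via the output_attentions flag; the DistilBERT checkpoint here is an arbitrary choice:

```python
# Inspecting raw transformer attention weights; model is illustrative.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModel.from_pretrained("distilbert-base-uncased",
                                  output_attentions=True)

inputs = tok("Attention links every token to every other token.",
             return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# out.attentions: one tensor per layer, shaped
# (batch, num_heads, seq_len, seq_len); each query row sums to 1.
last_layer = out.attentions[-1]
print(last_layer.shape)
print(last_layer[0, 0].sum(dim=-1))  # ~1.0 at every position
```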
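
Transformer-Interpret (the transformers-interpret package) builds on this by wrapping a model and tokenizer and returning per-token attribution scores; the SST-2 sentiment checkpoint below is an assumption for illustration:

```python
# Minimal transformers-interpret sketch; checkpoint is illustrative.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers_interpret import SequenceClassificationExplainer

name = "distilbert-base-uncased-finetuned-sst-2-english"
model = AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

explainer = SequenceClassificationExplainer(model, tokenizer)

# Returns (token, attribution) pairs for the predicted class.
attributions = explainer("The plot was predictable but the acting saved it.")
print(explainer.predicted_class_name)
print(attributions)
explainer.visualize()  # renders an HTML attribution heatmap in a notebook
```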