19 July 2024 | Georgios Kostopoulos, Gregory Davrazos and Sotiris Kotsiantis
This review article provides a comprehensive overview of the evolving landscape of Explainable Artificial Intelligence (XAI) in Decision Support Systems (DSSs). As AI continues to play a crucial role in decision-making across various domains, the need for transparency, interpretability, and trust becomes paramount. The article examines methodologies, applications, challenges, and future research directions for integrating explainability within AI-based DSSs. It analyzes current research and practical implementations to guide researchers, practitioners, and decision-makers in navigating the complex landscape of XAI-based DSSs. These systems assist end-users in decision-making by providing a full picture of how decisions are made, thereby boosting trust. The article proposes a taxonomy of current methodologies and presents representative works. Recent studies show growing interest in applying explainable DSSs (XDSSs) in fields such as medical diagnosis, manufacturing, and education, as they balance accuracy and explainability, boost confidence, and validate decisions.
XAI is a sub-field of AI that provides methodologies for explaining and interpreting the results of complex ML models. It aims to create human-comprehensible AI models that enable end-users to understand and trust predictions while maintaining high accuracy. XDSSs, human-centered DSSs with improved explainability, have gained attention for their ability to enhance transparency, accuracy, compliance, collaboration, cost savings, user experience, and confidence. The article presents a taxonomy of XDSSs based on five explanation types: visual, rule-based, case-based, natural language, and knowledge-based explainability. It discusses techniques such as LIME, SHAP, production rule systems, tree-based systems, case-based reasoning, natural language generation, and expert systems. The article also highlights applications of XDSSs in healthcare, transport, manufacturing, finance, education, and other domains, demonstrating their effectiveness in improving decision-making processes. The review concludes that XAI-based DSSs are essential for enhancing transparency, interpretability, and trust in AI-driven decisions, and that future research should focus on developing more effective and user-friendly XDSSs.
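To make the idea behind model-agnostic techniques such as LIME and SHAP concrete, the sketch below estimates each feature's local contribution to a prediction by perturbing one feature at a time and measuring the change in model output. This is a minimal, pure-Python illustration of the general perturbation principle, not the actual LIME or SHAP algorithm; the `black_box` scoring function and its weights are hypothetical stand-ins for a trained ML model.

```python
def black_box(features):
    # Hypothetical stand-in for a trained model: a simple weighted sum.
    # In a real XDSS this would be any opaque predictor (e.g. a neural net).
    weights = {"age": 0.3, "income": 0.5, "debt": -0.7}
    return sum(weights[name] * value for name, value in features.items())

def explain_instance(model, instance, delta=1.0):
    """Attribute a prediction to each feature by perturbing that feature
    by `delta` and recording how much the model output changes."""
    baseline = model(instance)
    contributions = {}
    for name in instance:
        perturbed = dict(instance)
        perturbed[name] += delta
        contributions[name] = model(perturbed) - baseline
    return contributions

# Explain one (hypothetical) applicant's score.
instance = {"age": 40.0, "income": 3.0, "debt": 2.0}
contribs = explain_instance(black_box, instance)
# For this linear stand-in, each sensitivity equals the corresponding
# weight (up to floating-point error), so "debt" pulls the score down
# while "income" pushes it up.
print(contribs)
```

Real systems refine this basic idea: LIME fits an interpretable surrogate model on many sampled perturbations, and SHAP averages contributions over feature coalitions to obtain Shapley values, but both ultimately answer the same question this sketch does: how does each input move the model's output for one decision?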