January 25, 2024 | Philip Mavrepis, Georgios Makridis, Georgios Fatouros, Vasileios Koukos, Maria Margarita Separdani, Dimosthenis Kyriazis
This paper introduces "x-[plAIn]", a novel approach to making Explainable Artificial Intelligence (XAI) more accessible to a wider audience through a custom Large Language Model (LLM) developed with ChatGPT Builder. The goal is a model that generates clear, concise summaries of various XAI methods tailored to different audiences, including business professionals and academics. The model's key feature is its ability to adapt explanations to the knowledge level and interests of each audience group, providing timely, clear, and contextually relevant explanations that support decision-making by end users.

Results from use-case studies show that the model delivers easy-to-understand, audience-specific explanations regardless of the XAI method used. This adaptability improves the accessibility of XAI, bridging the gap between complex AI technologies and their practical applications.

The paper also discusses the challenges of communicating AI concepts, the importance of explainability and interpretability, and the role of LLMs in XAI. The methodology integrates the outputs of XAI methods such as LIME, SHAP, and Grad-CAM into a GPT-based model that generates natural-language explanations. Tested across a range of use cases, the model proves effective at delivering audience-specific explanations and is preferred by users for its clarity and usability, particularly in decision-making contexts and image comprehension.
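The paper does not include code for this pipeline, so the sketch below is only an illustration of how the SHAP-to-LLM step could look: it computes SHAP attributions for one prediction and asks an LLM to summarize them for a chosen audience. The dataset, model name, prompt wording, and the audience variable are assumptions for the example, not the authors' implementation, which was built with ChatGPT Builder rather than the API.

```python
import shap
from openai import OpenAI
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a simple model and compute SHAP attributions for one prediction.
data = load_diabetes()
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(data.data, data.target)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:1])[0]  # attributions for a single sample

# Keep the most influential features and serialize them as plain text.
top_features = sorted(
    zip(data.feature_names, shap_values),
    key=lambda pair: abs(pair[1]),
    reverse=True,
)[:5]
payload = "\n".join(f"{name}: {value:+.3f}" for name, value in top_features)

# Ask an LLM for an audience-specific, plain-language summary of the attributions.
client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
audience = "business professional"  # could also be "academic", "AI engineer", ...
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name, not the paper's custom GPT
    messages=[
        {"role": "system",
         "content": f"Explain these SHAP feature attributions to a {audience} "
                    "in plain language, focusing on what drove the prediction."},
        {"role": "user", "content": payload},
    ],
)
print(response.choices[0].message.content)
```

The same pattern extends to LIME or Grad-CAM outputs by swapping the attribution step and adjusting the payload; varying the audience string is what yields the audience-specific explanations the paper describes.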
The paper concludes that future enhancements should allow end users to specify their preferred level of detail, and that the model could also help experienced AI engineers identify and mitigate biases in the model-creation pipeline. The research was funded by the European Union's Project HumAIne.