9 May 2024 | Alexandra Zytek, Sara Pidó, Kalyan Veeramachaneni
This paper explores the use of Large Language Models (LLMs) to enhance Explainable Artificial Intelligence (XAI) by transforming ML explanations into natural, human-readable narratives. The authors focus on refining existing XAI explanations rather than using LLMs to explain ML models directly. They outline several research directions, including defining evaluation metrics, designing prompts, comparing LLMs, exploring further training methods such as fine-tuning, and integrating external data. Initial experiments and a user study suggest that LLMs, particularly GPT-4, can generate sound, complete, and context-aware narrative explanations. In the study, participants preferred narrative-based explanations, finding them easier to understand and more informative than standard feature-importance presentations. The authors conclude that LLM-based narrative explanations can improve user understanding of ML outputs and help make AI systems more transparent, interpretable, and usable. Future work will investigate fine-tuning methods, additional LLMs, and the integration of training data and external guides to create context-aware explanations.
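To make the core idea concrete, below is a minimal sketch of the kind of transformation the paper studies: feeding a feature-contribution explanation (e.g., SHAP-style output) to GPT-4 and asking for a plain-English narrative. The prompt wording, the house-pricing example data, and the helper names here are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch: turning a feature-contribution explanation into a narrative
# with an LLM. Example data and prompt wording are assumptions for
# illustration, not the paper's actual experimental setup.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# A toy explanation for one prediction: (feature, value, contribution in USD).
explanation = [
    ("house size (sq ft)", 2100, 14000),
    ("year built", 1962, -9500),
    ("neighborhood quality (1-10)", 8, 11200),
]

# Serialize the explanation as one line per feature.
lines = "\n".join(
    f"- {name} = {value}: {contrib:+,} USD"
    for name, value, contrib in explanation
)

prompt = (
    "You are explaining a machine-learning prediction of a house's sale "
    "price to a non-technical user. Each line below gives a feature, its "
    "value, and its contribution to the predicted price. Rewrite this as "
    "a short, plain-English narrative without mentioning model internals:\n"
    + lines
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The design choice worth noting is that the LLM never sees the model itself, only the already-computed explanation, which is what distinguishes this "explaining explanations" approach from asking an LLM to interpret a model directly.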