Explanation in Artificial Intelligence: Insights from the Social Sciences

August 16, 2018 | Tim Miller
This paper explores the integration of social science research into the field of explainable artificial intelligence (XAI). It argues that current XAI research often relies on researchers' intuitions rather than established theories from philosophy, psychology, and cognitive science. The paper reviews the relevant literature on explanation, highlighting four key findings: (1) explanations are contrastive, addressing why an event occurred rather than some alternative event, not merely listing its causes; (2) explanation selection is biased, with people choosing one or two causes from a potentially large set under the influence of cognitive heuristics; (3) probabilities are not as important as causes, since referring to statistical relationships is less effective than citing causal ones; and (4) explanations are social, arising as conversational interactions between the explainer and the explainee. The paper emphasizes the importance of these findings for building truly explainable AI, particularly in contexts where trust and transparency are crucial. It also discusses the role of abductive reasoning in explanation and the need for a comprehensive understanding of human explanation processes to improve AI systems.