Tim Miller explores the role of the social sciences in explainable artificial intelligence (XAI), arguing that insights from philosophy, psychology, and cognitive science should inform the design of explainable AI systems. The paper contends that much current XAI research relies on researchers' intuitions about what constitutes a good explanation rather than on established social science frameworks, which offer a deeper understanding of how people generate, select, evaluate, and present explanations. Miller reviews the relevant literature from these disciplines and highlights several key findings: explanations are contrastive (people ask why one event occurred rather than another), explanation selection is shaped by cognitive biases, people reason poorly with probabilities when giving and receiving explanations, and explanation is fundamentally a social process. The paper also discusses the importance of aligning AI explanations with human expectations and social norms, arguing that doing so can improve trust and usability. Explanations, it emphasizes, are not merely statements of causal relationships; they involve social interaction and the transfer of knowledge from explainer to explainee. Miller concludes that this multidisciplinary approach, combining insights from philosophy, psychology, and cognitive science, can lead to more effective and user-friendly explainable AI systems.