2019 | David Gunning, Mark Stefik, Jaesik Choi, Timothy Miller, Simone Stumpf, Guang-Zhong Yang
The paper "XAI - Explainable Artificial Intelligence" by David Gunning, Mark Stefik, Jaesik Choi, Timothy Miller, Simone Stumpf, and Guang-Zhong Yang discusses the importance of providing explanations for AI systems to enhance user understanding, trust, and management. While recent advancements in machine learning (ML) have led to significant AI applications, many of these systems lack transparency in their decision-making processes. The authors highlight the trade-off between ML performance and explainability, noting that highly accurate models often lack interpretability.
The paper defines explainable AI (XAI) systems as those that provide intelligible explanations to users, covering their capabilities, actions, and future intentions. It emphasizes that explanations must be context-dependent and tailored to the user's task, abilities, and expectations. The authors also discuss the different types of explanations, including full and partial interpretations, and the constraints that interpretable models must adhere to.
User expectations for XAI vary across user groups such as intelligence analysts, developers, and policymakers, so effective explanations must take the user's background knowledge and needs into account. The paper also surveys methods for evaluating and measuring the effectiveness of explanations, including subjective measures like user satisfaction and objective measures like task performance.
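As a rough illustration of that distinction (the record fields and the 1-5 satisfaction scale are assumptions for illustration, not from the paper), a user-study harness might separate objective task-performance measures from subjective satisfaction ratings like this:

```python
# Minimal sketch of the two kinds of evaluation measures the paper distinguishes,
# using made-up study records; field names and the 1-5 scale are assumptions.
from statistics import mean

# Each record: whether the participant completed the task correctly, how long it
# took, and a post-task satisfaction rating for the explanation they received.
records = [
    {"task_correct": True,  "time_s": 41.0, "satisfaction": 4},
    {"task_correct": True,  "time_s": 65.5, "satisfaction": 3},
    {"task_correct": False, "time_s": 90.2, "satisfaction": 2},
]

# Objective measures: task performance (accuracy, completion time).
task_accuracy = mean(1.0 if r["task_correct"] else 0.0 for r in records)
mean_time = mean(r["time_s"] for r in records)

# Subjective measure: self-reported satisfaction with the explanation.
mean_satisfaction = mean(r["satisfaction"] for r in records)

print(f"task accuracy: {task_accuracy:.2f}, mean time: {mean_time:.1f}s, "
      f"mean satisfaction: {mean_satisfaction:.1f}/5")
```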
Finally, the authors address several challenges in XAI, including the balance between accuracy and interpretability, the use of abstractions to simplify explanations, and the distinction between explaining competencies and decisions. They suggest that future research should focus on human-centered perspectives, aiming to enhance social roles for XAI beyond individual explanations and trust-building.