June 03-06, 2024 | Vera Schmitt, Luis-Felipe Villa-Arenas, Nils Feldhus, Joachim Meyer, Robert P. Spang, Sebastian Möller
This paper explores the role of explainability in collaborative human-AI disinformation detection. As AI-generated disinformation becomes a major global risk, the need for transparent and explainable AI systems is critical. The study evaluates the human-meaningfulness of different types of explanations in disinformation detection, focusing on their impact on performance, understandability, usefulness, and trust. A Wizard-of-Oz study with 433 participants, including 406 crowdworkers and 27 journalists, was conducted to assess the effectiveness of various XAI features. The results show that free-text explanations improve non-expert performance but do not affect expert performance. XAI features enhance perceived usefulness, understandability, and trust in AI systems, but can also lead to blind trust when AI predictions are incorrect. The study also examines the influence of media literacy and expectations towards AI on disinformation detection. Findings indicate that higher media literacy reduces blind trust in AI systems. The research contributes to the understanding of human-centered XAI evaluation and highlights the importance of explainability in ensuring transparency and trust in AI systems for disinformation detection. The study underscores the value of human-AI collaboration in content verification, especially in the absence of expert knowledge. The results emphasize the need for balanced AI system transparency to avoid information overload and overreliance on AI. The study also highlights the importance of considering individual background knowledge and expectations when designing XAI features for disinformation detection.