June 03–06, 2024, Rio de Janeiro, Brazil | Vera Schmitt, Luis-Felipe Villa-Arenas, Nils Feldhus, Joachim Meyer, Robert P. Spang, Sebastian Möller
This paper explores the role of explainability in collaborative human-AI disinformation detection, emphasizing the importance of transparency and human-centered evaluation. The study evaluates three types of explanations (highlights, free-text, and structured) and their impact on performance, understandability, usefulness, and trust. A total of 433 participants, including 406 crowdworkers and 27 journalists, took part in a Wizard-of-Oz study. The results show that free-text explanations significantly improve the performance of non-experts, while they do not affect the performance of experts. XAI features enhance perceived usefulness, understandability, and trust in the AI system, but can also lead to blind trust when the AI makes incorrect predictions. Media literacy is found to positively influence accuracy and to reduce the tendency toward blind trust. The study concludes that human-AI collaboration is crucial for content verification tasks, especially in the context of disinformation detection, and that free-text explanations can bridge the gap between human and AI capabilities.