May 11-16, 2024 | Liudmila Zavolokina; Kilian Sprenkamp; Zoya Katashinskaya; Daniel Gordon Jones; Gerhard Schwabe
This paper introduces ClarifAI, an automated propaganda detection tool designed to enhance critical thinking by activating System 2 analytical thinking in users. Grounded in Kahneman's dual-system theory, ClarifAI uses Large Language Models (LLMs) to detect propaganda in news articles and provides context-rich explanations that help readers understand the 'why' and 'how' behind the detected propaganda. The tool is designed to nudge users from System 1 (fast, intuitive thinking) to System 2 (slow, analytical thinking), encouraging more critical news consumption.

ClarifAI was developed using the design science research methodology (DSRM), spanning problem identification, solution definition, design and development, prototype demonstration, evaluation, and communication of findings. It was evaluated with an expert survey and an online experiment, which showed that users who engage with the tool are more likely to read news content critically and that the tool's explanations are crucial for triggering this critical engagement.

The research highlights the importance of transparency and user education in propaganda detection, addresses challenges such as the lack of explainability and transparency in automated labeling, and points to the potential of LLMs to improve user experience and understanding in propaganda identification. The study contributes a practical tool and design knowledge for mitigating propaganda in digital news, concluding that lightweight nudges can significantly improve the accuracy of shared news in online environments and that integrating such nudges into digital tools can enhance user engagement and accuracy in combating misinformation.
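The abstract does not detail how ClarifAI prompts the LLM, but a minimal sketch of the general approach (detecting propaganda techniques and returning explanations that can be shown to the reader) might look like the following. This assumes an OpenAI-style chat completions API; the model name, prompt wording, and the detect_propaganda helper are illustrative assumptions, not the authors' actual implementation.

```python
# Illustrative sketch (not the authors' implementation): ask an LLM to flag
# propaganda techniques in an article and explain each finding, so the
# explanation can be displayed alongside the label.
from openai import OpenAI  # assumes the `openai` Python package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a propaganda-detection assistant. Identify propaganda techniques "
    "(e.g., loaded language, appeal to fear, name-calling) in the article. "
    "For each finding, quote the passage, name the technique, and explain in "
    "1-2 sentences why it qualifies, so a reader can evaluate it critically."
)

def detect_propaganda(article_text: str, model: str = "gpt-4") -> str:
    """Return the model's findings and explanations for one news article."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": article_text},
        ],
        temperature=0,  # keep the labeling as deterministic as possible
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(detect_propaganda("Example article text goes here..."))
```

The key design point reflected here is that the model is asked not just for a label but for an explanation of each detected technique, mirroring the paper's emphasis on explanations as the trigger for System 2 critical reading.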