The FIGNEWS Shared Task on News Media Narratives

25 Jul 2024 | Wajdi Zaghouani, Mustafa Jarrar, Nizar Habash, Houda Bouamor, Imed Zitouni, Mona Diab, Samhaa R. El-Beltagy, Muhammed AbuOdeh
The FIGNEWS shared task, organized as part of the ArabicNLP 2024 conference, focuses on addressing bias and propaganda in multilingual news posts, particularly in the context of the early days of the Israel War on Gaza. The task aims to foster collaboration in developing annotation guidelines for subjective tasks by creating frameworks for analyzing diverse narratives. Participants were invited to work on two subtasks, Bias Annotation and Propaganda Annotation, with four evaluation tracks: guidelines development, annotation quality, annotation quantity, and consistency. A total of 17 teams participated, producing 129,800 data points.
The task highlights the importance of clear guidelines, examples, and collaboration in advancing NLP research on complex, subjective, and sensitive opinion analysis tasks. The resulting dataset and insights contribute valuable resources and direction for future work in this area. The study also discusses the label distribution patterns for Bias and Propaganda, noting that the bias labeling task is challenging because achieving high inter-annotator agreement (IAA) is difficult, and that the propaganda labeling task is similarly demanding. The findings provide valuable insights for improving media literacy and fostering a more informed and critically engaged public.