August 19–23, 2018, London, United Kingdom | Yaqing Wang¹, Fenglong Ma¹, Zhiwei Jin², Ye Yuan³, Guangxu Xun¹, Kishlay Jha¹, Lu Su¹, Jing Gao¹
The paper introduces Event Adversarial Neural Networks (EANN), a novel framework for multi-modal fake news detection. EANN addresses the challenge of identifying fake news on newly emerged events by learning event-invariant features. The framework consists of three main components: a multi-modal feature extractor, a fake news detector, and an event discriminator. The multi-modal feature extractor extracts textual and visual features from posts, while the fake news detector and event discriminator work together to learn discriminative and event-invariant representations, respectively. The event discriminator removes event-specific features, ensuring that the learned representations are transferable across different events. Extensive experiments on datasets from Weibo and Twitter show that EANN outperforms state-of-the-art methods, demonstrating its effectiveness in detecting fake news on new and time-critical events.