August 19-23, 2018 | Yaqing Wang, Fenglong Ma, Zhiwei Jin, Ye Yuan, Guangxu Xun, Kishlay Jha, Lu Su, Jing Gao
This paper proposes Event Adversarial Neural Networks (EANN), an end-to-end framework for multi-modal fake news detection. The central challenge on social media is detecting fake news about newly emerged events: existing methods rely on event-specific features that do not transfer to unseen events. EANN addresses this by learning event-invariant features that are shared across events, enabling effective detection of fake news on new events.
EANN consists of three main components: a multi-modal feature extractor, a fake news detector, and an event discriminator. The feature extractor derives textual and visual features from each post, and the fake news detector classifies the post as fake or real from these features. The event discriminator tries to predict which event a post belongs to; because the feature extractor is trained adversarially to fool it, event-specific signals are stripped away and only features shared among events are retained. This adversarial setup allows the model to learn transferable feature representations that are invariant to specific events.
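To make the adversarial setup concrete, the following is a minimal sketch of an EANN-style model in PyTorch. It is illustrative rather than the authors' exact architecture: the text and image branches are stand-in linear layers (the paper uses a Text-CNN and a pretrained VGG-19 for feature extraction), the layer sizes and names (text_dim, img_dim, hidden, num_events) are assumptions, and the adversarial game is realized here with a gradient-reversal layer that flips the discriminator's gradient before it reaches the shared feature extractor.

```python
# Minimal EANN-style sketch (illustrative, not the authors' exact architecture).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses (and scales) gradients on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip the sign of the gradient flowing back from the event discriminator.
        return -ctx.lambd * grad_output, None

class EANN(nn.Module):
    def __init__(self, text_dim=300, img_dim=4096, hidden=64, num_events=10):
        super().__init__()
        # Multi-modal feature extractor (placeholder branches for text and image).
        self.text_fc = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        self.img_fc = nn.Sequential(nn.Linear(img_dim, hidden), nn.ReLU())
        # Fake news detector: binary classifier on the fused features.
        self.detector = nn.Linear(2 * hidden, 2)
        # Event discriminator: predicts which event a post belongs to.
        self.discriminator = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_events),
        )

    def forward(self, text_feat, img_feat, lambd=1.0):
        fused = torch.cat([self.text_fc(text_feat), self.img_fc(img_feat)], dim=1)
        news_logits = self.detector(fused)
        # Gradient reversal sits between the shared features and the discriminator.
        event_logits = self.discriminator(GradReverse.apply(fused, lambd))
        return news_logits, event_logits
```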
Extensive experiments on two large-scale social media datasets (Twitter and Weibo) show that EANN outperforms state-of-the-art methods in terms of accuracy, precision, and F1 score. The model's ability to learn event-invariant features enables it to detect fake news on new events effectively. The event discriminator plays a crucial role in this process by removing event-specific features and enhancing the model's generalization ability.
The proposed EANN model is a general framework for fake news detection: the multi-modal feature extractor can be swapped for other feature extraction models. Integrating adversarial learning with multi-modal features lets the model exploit both textual and visual information, improving detection. The results confirm that EANN identifies fake news effectively and that the event discriminator contributes significantly to this performance.
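Continuing the hedged sketch above (same assumed EANN class and made-up tensor shapes), a single training step shows how adversarial learning and multi-modal features combine: because of the gradient-reversal layer, minimizing the sum of the detection loss and the event-classification loss trains the detector normally while pushing the feature extractor to discard event-specific cues.

```python
# One training step for the sketch above, with dummy inputs and assumed sizes.
import torch
import torch.nn.functional as F

model = EANN(text_dim=300, img_dim=4096, hidden=64, num_events=10)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

text_feat = torch.randn(32, 300)          # e.g. pooled word embeddings
img_feat = torch.randn(32, 4096)          # e.g. activations from a pretrained CNN
news_labels = torch.randint(0, 2, (32,))  # fake / real
event_labels = torch.randint(0, 10, (32,))

news_logits, event_logits = model(text_feat, img_feat, lambd=1.0)
# The gradient-reversal layer makes this simple sum behave adversarially:
# the discriminator learns to classify events, while the extractor learns
# features the discriminator cannot separate by event.
loss = F.cross_entropy(news_logits, news_labels) \
     + F.cross_entropy(event_logits, event_labels)

optimizer.zero_grad()
loss.backward()
optimizer.step()
```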