February 2024 | GUIXIAN ZHANG, SHICHAO ZHANG, GUAN YUAN
This paper addresses the critical task of detecting misinformation on social media, which has significant impacts on public safety and government operations. The authors propose an efficient model based on self-supervised contrastive learning, centered on a Bayesian Graph Local Extrema Convolution (BLC) operator for aggregating node features in graph structures. BLC accounts for unreliable relationships and uncertainty in propagation structures, emphasizing attribute differences between nodes and their neighbors. In addition, a new long-tail strategy matches long-tail users against the global social network, avoiding over-concentration on high-degree nodes.
The model is evaluated on two public Twitter datasets, where it outperforms existing misinformation-detection methods. Ablation experiments and robustness tests further validate the effectiveness and robustness of the proposed model. The study also applies the model to bot detection, achieving strong results. Overall, the paper contributes a robust misinformation-detection framework that leverages Bayesian methods and contrastive learning.
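The core idea behind BLC, aggregating neighbor information by emphasizing how a node's attributes differ from those of its neighbors, can be sketched as follows. This is a minimal illustration only: the function name, the mean-of-differences update rule, and the plain binary adjacency input are assumptions for exposition, and the Bayesian modeling of unreliable edges from the paper is omitted.

```python
import numpy as np

def local_extrema_conv(X, A):
    """Sketch of a local-extrema-style graph aggregation.

    Each node's update emphasizes its attribute differences from its
    neighbors, rather than smoothing toward the neighborhood mean.
    X: (n, d) node feature matrix; A: (n, n) binary adjacency matrix.
    Hypothetical simplification: the paper's BLC operator additionally
    weights edges by Bayesian reliability estimates, not modeled here.
    """
    out = np.zeros_like(X)
    for v in range(X.shape[0]):
        nbrs = np.nonzero(A[v])[0]
        if len(nbrs) == 0:
            out[v] = X[v]          # isolated node: keep its own features
            continue
        diffs = X[v] - X[nbrs]     # attribute differences node vs. neighbors
        out[v] = X[v] + diffs.mean(axis=0)  # amplify divergence from neighbors
    return out
```

On a 3-node chain graph, a node whose features sit below its neighbor's is pushed further down and one above is pushed further up, which is the "local extrema" intuition: nodes that differ from their surroundings become more distinguishable after aggregation.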