February 2024 | GUIXIAN ZHANG, SHICHAO ZHANG, GUAN YUAN
This paper proposes a Bayesian graph local extrema convolution (BLC) with a long-tail strategy for detecting misinformation on social media. The BLC model aggregates node features over the graph structure, emphasizing attribute differences between neighboring nodes as well as structural uncertainty. A long-tail strategy is introduced to avoid over-reliance on high-degree nodes in graph neural networks, improving the effectiveness of existing graph-based methods. The model is evaluated on two Twitter datasets, where it yields significant improvements in misinformation detection, and on three graph datasets, where it remains robust under 15% data perturbation. Bot detection experiments on the TwiBot-20 dataset show that BLC outperforms existing methods, and additional robustness experiments demonstrate resilience to adversarial attacks. The paper concludes that BLC effectively captures misinformation patterns and enhances the detection of both misinformation and bots in social networks. By incorporating uncertainty and long-tail user considerations, the proposed method addresses key challenges of misinformation detection and improves the performance of graph-based models.
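To make the aggregation idea concrete, below is a minimal sketch of a local-extrema-style graph convolution in PyTorch, in which each node aggregates the difference between its own attributes and the mean of its neighbors' attributes so that locally extreme nodes stand out. This is an illustrative sketch under stated assumptions, not the paper's BLC layer: the class name `LocalExtremaConv`, the transforms `w_self`/`w_diff`, and the dense-adjacency formulation are hypothetical, and the Bayesian uncertainty modeling and long-tail down-weighting described above are omitted.

```python
import torch
import torch.nn as nn

class LocalExtremaConv(nn.Module):
    """Sketch of a local-extrema-style graph convolution: each node
    combines its own features with the difference between itself and
    the mean of its neighbors, highlighting attribute differences.
    (Hypothetical illustration; the paper's BLC layer additionally
    models uncertainty and applies a long-tail strategy.)"""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w_self = nn.Linear(in_dim, out_dim)  # transform of the node's own features
        self.w_diff = nn.Linear(in_dim, out_dim)  # transform of self-minus-neighbor differences

    def forward(self, x, adj):
        # x:   (N, in_dim) node feature matrix
        # adj: (N, N) dense adjacency matrix (1.0 where an edge exists)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)  # node degrees, avoiding division by zero
        neigh_mean = adj @ x / deg                        # mean of neighbor features
        diff = x - neigh_mean                             # attribute difference term
        return torch.relu(self.w_self(x) + self.w_diff(diff))

# Usage: 5 nodes with 16-dimensional features and a random adjacency matrix.
x = torch.randn(5, 16)
adj = (torch.rand(5, 5) > 0.5).float()
layer = LocalExtremaConv(16, 32)
out = layer(x, adj)  # shape (5, 32)
```

The difference term is what distinguishes this style of aggregation from a plain mean-pooling graph convolution: nodes whose attributes deviate sharply from their neighborhood retain a strong signal rather than being smoothed away.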