2024 | Sofiane Ennadir, Yassine Abbahaddou, Johannes F. Lutzeyer, Michalis Vazirgiannis, Henrik Boström
This paper introduces NoisyGNN, a novel defense method for Graph Neural Networks (GNNs) that injects noise into the model's architecture to enhance robustness against adversarial perturbations. The authors establish a theoretical connection between noise injection and GNN robustness, and demonstrate the effectiveness of their approach through extensive empirical evaluations on node classification tasks with popular architectures such as GCN and GIN. The proposed method achieves defense performance superior or comparable to existing methods while adding minimal time complexity and preserving accuracy on clean graphs. The approach is model-agnostic, can be integrated with different GNN architectures, and shows promising results when combined with other defense techniques. The code for NoisyGNN is publicly available on GitHub.
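To make the core idea concrete, here is a minimal PyTorch sketch of noise injection in a GCN-style layer. It assumes zero-mean Gaussian noise added to the hidden representations during training; the class name `NoisyGCNLayer`, the `sigma` hyperparameter, and the exact noise placement are illustrative and may differ from the authors' implementation.

```python
import torch
import torch.nn as nn

class NoisyGCNLayer(nn.Module):
    """Illustrative sketch of the noise-injection idea: a GCN layer
    that perturbs its hidden representation with Gaussian noise.
    Not the authors' exact implementation."""

    def __init__(self, in_dim: int, out_dim: int, sigma: float = 0.1):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.sigma = sigma  # noise scale (assumed hyperparameter)

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        # Standard GCN propagation: normalized adjacency times X W
        h = adj_norm @ self.linear(x)
        # Inject zero-mean Gaussian noise into the hidden representation
        if self.training:
            h = h + self.sigma * torch.randn_like(h)
        return torch.relu(h)
```

Since the noise is added inside the forward pass rather than to the input graph, a layer like this can wrap any message-passing backbone (e.g. GCN or GIN), which is consistent with the summary's claim that the defense is model-agnostic.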