A Simple and Yet Fairly Effective Defense for Graph Neural Networks

21 Feb 2024 | Sofiane Ennadir¹, Yassine Abbahaddou², Johannes F. Lutzeyer², Michalis Vazirgiannis¹,², Henrik Boström¹
This paper introduces NoisyGNN, a defense method for Graph Neural Networks (GNNs) that improves adversarial robustness by injecting random noise into the model's architecture. The authors establish a theoretical connection between noise injection and GNN robustness, and validate it through extensive empirical evaluations on node classification tasks with GCN and GIN backbones. On real-world benchmark datasets, NoisyGNN achieves defense performance superior or comparable to existing methods against both structural and node feature-based adversarial attacks, while adding minimal time complexity and preserving accuracy on clean graphs. Because the approach is model-agnostic, it can be integrated into a wide range of GNN architectures, and combining it with existing defense techniques further improves adversarial robustness. Its theoretical guarantees and low overhead make NoisyGNN a promising, practical addition to the field of GNN security.
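To make the core mechanism concrete, below is a minimal PyTorch sketch of noise injection into the hidden representations of a two-layer GCN. The class name NoisyGCN, the choice of Gaussian noise, the noise_std hyperparameter, and the placement of the noise after the first layer are illustrative assumptions for exposition, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyGCN(nn.Module):
    """Two-layer GCN that perturbs hidden features with random noise
    during training (a sketch of a NoisyGNN-style defense; the paper's
    exact noise distribution and placement may differ)."""

    def __init__(self, in_dim, hid_dim, out_dim, noise_std=0.1):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, out_dim)
        self.noise_std = noise_std  # assumed hyperparameter for the noise scale

    def forward(self, x, adj_norm):
        # adj_norm: symmetrically normalized adjacency D^{-1/2}(A+I)D^{-1/2}
        h = F.relu(adj_norm @ self.lin1(x))
        if self.training:
            # Core idea: random perturbation of hidden representations
            # partially masks small adversarial changes to the input graph.
            h = h + self.noise_std * torch.randn_like(h)
        return adj_norm @ self.lin2(h)

# Usage on a toy graph: 4 nodes, 3 input features, 2 classes.
A = torch.tensor([[0., 1., 0., 0.],
                  [1., 0., 1., 0.],
                  [0., 1., 0., 1.],
                  [0., 0., 1., 0.]])
A_hat = A + torch.eye(4)                                 # add self-loops
d = A_hat.sum(1)
adj_norm = A_hat / torch.sqrt(d[:, None] * d[None, :])   # D^{-1/2} A_hat D^{-1/2}

model = NoisyGCN(in_dim=3, hid_dim=8, out_dim=2)
logits = model(torch.randn(4, 3), adj_norm)              # noise active in train mode
```

At inference time (model.eval()), the noise branch is skipped, which is one simple way to keep clean-graph performance intact; whether the paper also samples noise at test time is a detail this sketch does not assume.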