9 Dec 2021 | Daniel Zügner, Amir Akbarnejad, Stephan Günnemann
This paper investigates the robustness of deep learning models for graph data against adversarial attacks, focusing on models that use graph convolutions. The study introduces the first adversarial attack on attributed graphs, targeting both node features and graph structure while keeping the perturbations unnoticeable. The proposed method, Nettack, is an efficient algorithm that exploits incremental computations to handle the discrete nature of graph data. Experiments demonstrate that node classification accuracy drops significantly under minimal perturbations, and that the attacks transfer to other state-of-the-art models and datasets, even when only limited knowledge of the graph is available. The paper also highlights the challenges and opportunities in designing such attacks, particularly in the context of graph-based learning.
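To make the idea of a discrete, structure-targeting attack concrete, here is a minimal sketch of a generic greedy edge-flip attack against a linearized two-layer GCN surrogate. This is an illustrative toy, not the paper's actual Nettack scoring or its incremental-computation speedups: the function names, the brute-force candidate search, and the margin-based objective are assumptions made for clarity.

```python
import numpy as np

def greedy_edge_attack(A, X, W, target, true_class, budget):
    """Greedily flip edges to lower the target node's classification margin.

    Illustrative sketch only: scores every single-edge flip by recomputing
    the margin of a linearized two-layer GCN surrogate (logits = A_norm^2 X W),
    then commits the best flip, up to `budget` flips.
    """
    A = A.copy()
    n = A.shape[0]

    def margin(A):
        # Symmetrically normalized adjacency with self-loops.
        A_hat = A + np.eye(n)
        d = A_hat.sum(1)
        A_norm = A_hat / np.sqrt(np.outer(d, d))
        # Linearized two-layer GCN surrogate (no nonlinearity).
        logits = A_norm @ A_norm @ X @ W
        z = logits[target]
        # Margin of the true class over the best competing class.
        return z[true_class] - np.max(np.delete(z, true_class))

    for _ in range(budget):
        best_flip, best_margin = None, margin(A)
        for u in range(n):
            for v in range(u + 1, n):
                A[u, v] = A[v, u] = 1 - A[u, v]   # try flipping edge (u, v)
                m = margin(A)
                A[u, v] = A[v, u] = 1 - A[u, v]   # undo the trial flip
                if m < best_margin:
                    best_flip, best_margin = (u, v), m
        if best_flip is None:       # no flip helps; stop early
            break
        u, v = best_flip
        A[u, v] = A[v, u] = 1 - A[u, v]           # commit the best flip
    return A
```

The paper's contribution is precisely to avoid this brute-force recomputation: its incremental updates make scoring candidate perturbations efficient, whereas this sketch re-evaluates the full surrogate for every candidate flip.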