2020 | Jie Zhou, Ganqu Cui, Shengding Hu, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, Maosong Sun
Graph neural networks (GNNs) are neural models that capture the dependence of graphs via message passing between nodes. Recent variants such as graph convolutional networks (GCN), graph attention networks (GAT), and graph recurrent networks (GRN) have demonstrated strong performance on a wide range of deep learning tasks. This survey provides a general design pipeline for GNN models, discusses variants of each component, systematically categorizes applications, and proposes four open problems for future research.
GNNs are widely used in tasks that involve graph data, such as modeling physical systems, learning molecular fingerprints, predicting protein interfaces, and classifying diseases. They are also applied to non-structural data such as texts and images, where reasoning over extracted structures (e.g., dependency trees and scene graphs) is important. The design pipeline consists of finding the graph structure, specifying the graph type and scale, designing loss functions, and building the model from computational modules. Key components include propagation modules (e.g., convolution, recurrent, and attention-based operators), sampling modules, and pooling modules. The survey also discusses GNN variants, including spectral, spatial, and attention-based methods, and highlights their applications across domains. The paper concludes with four open problems for future research in GNNs.
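The message passing the survey describes can be sketched as a single GCN-style propagation step. This is a minimal NumPy sketch, not the survey's own code: it follows the symmetric-normalization form H' = sigma(D^{-1/2}(A+I)D^{-1/2} H W) commonly associated with GCNs; the function name, toy graph, and weight matrix are illustrative assumptions.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN-style propagation step: each node aggregates degree-
    normalized messages from its neighbors (and itself), then applies
    a linear map and a ReLU nonlinearity."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(d ** -0.5)           # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)    # aggregate, transform, ReLU

# Toy example: a 3-node path graph with 2-dimensional node features.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.eye(3, 2)             # simple input features
W = np.full((2, 2), 0.5)     # toy weight matrix
H_next = gcn_layer(A, H, W)
print(H_next.shape)          # (3, 2): one updated feature row per node
```

Stacking such layers lets information propagate over multi-hop neighborhoods, which is the basic mechanism the surveyed convolution, recurrent, and attention-based propagation modules all refine in different ways.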