GRAPHSAINT: GRAPH SAMPLING BASED INDUCTIVE LEARNING METHOD


16 Feb 2020 | Hanqing Zeng, Hongkuan Zhou, Ajitesh Srivastava, Rajgopal Kannan, Viktor Prasanna
GraphSAINT is a graph sampling-based inductive learning method designed to improve the training efficiency and accuracy of Graph Convolutional Networks (GCNs) on large graphs. Unlike existing methods that sample nodes or edges across GCN layers, GraphSAINT samples the training graph first and then constructs a complete GCN on the subgraph. This approach ensures a fixed number of well-connected nodes in all layers, effectively addressing the "neighbor explosion" problem during minibatch training. The method introduces normalization techniques to eliminate bias and sampling algorithms to reduce variance. It also decouples sampling from forward and backward propagation, enabling the integration of various GCN architectures, such as graph attention and jumping connections.

GraphSAINT achieves superior performance in both accuracy and training time on five large graphs, with new state-of-the-art F1 scores for PPI (0.995) and Reddit (0.970). The method demonstrates significant gains in training accuracy and time, and its flexibility is shown by integrating it with popular GCN architectures such as JK-net and GAT. Overall, the proposed method addresses challenges in scalability, bias, and variance, and provides efficient and effective training for deep GCNs.
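The following is a minimal sketch (not the authors' reference implementation) of the core GraphSAINT training loop step: sample a subgraph of the training graph for each minibatch, then run a full GCN on that subgraph alone. Function names such as sample_nodes, build_norm_adj, and gcn_forward, as well as the uniform node sampler, are illustrative assumptions; the paper additionally proposes edge and random-walk samplers and bias/variance normalization terms that are omitted here.

```python
import numpy as np

def sample_nodes(num_nodes, budget, rng):
    # Uniform node sampler (illustrative); GraphSAINT also uses edge and
    # random-walk samplers chosen to reduce estimator variance.
    return rng.choice(num_nodes, size=min(budget, num_nodes), replace=False)

def build_norm_adj(adj, nodes):
    # Induced subgraph adjacency with self-loops and symmetric normalization
    # D^{-1/2} (A + I) D^{-1/2}, as in a standard GCN layer.
    sub = adj[np.ix_(nodes, nodes)] + np.eye(len(nodes))
    deg = sub.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    return sub * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_forward(a_norm, x, w1, w2):
    # Two-layer GCN applied to the sampled subgraph only: every layer sees the
    # same fixed node set, so there is no per-layer neighbor explosion.
    h = np.maximum(a_norm @ x @ w1, 0.0)
    return a_norm @ h @ w2

# Toy usage: one minibatch of subgraph construction plus a forward pass.
rng = np.random.default_rng(0)
n, f, c = 100, 16, 4                                # nodes, feature dim, classes
adj = (rng.random((n, n)) < 0.05).astype(float)
adj = np.maximum(adj, adj.T)                        # symmetric toy graph
x = rng.standard_normal((n, f))
w1 = rng.standard_normal((f, 32)) * 0.1
w2 = rng.standard_normal((32, c)) * 0.1

nodes = sample_nodes(n, budget=30, rng=rng)         # 1) sample the training graph
a_norm = build_norm_adj(adj, nodes)                 # 2) build the subgraph's normalized adjacency
logits = gcn_forward(a_norm, x[nodes], w1, w2)      # 3) run the full GCN on the subgraph
print(logits.shape)                                 # (30, 4)
```

Because sampling is decoupled from forward and backward propagation, the gcn_forward step above could be swapped for a graph-attention or jumping-knowledge architecture without changing the sampling code.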