16 Feb 2020 | Hanqing Zeng, Hongkuan Zhou, Ajitesh Srivastava, Rajgopal Kannan, Viktor Prasanna
GraphSAINT is a graph sampling-based inductive learning method for training deep Graph Convolutional Networks (GCNs) on large graphs. Unlike layer sampling techniques that sample nodes or edges across GCN layers, GraphSAINT constructs each minibatch by sampling a subgraph of the training graph, ensuring a fixed number of well-connected nodes in every layer. This approach addresses the "neighbor explosion" problem and improves both training efficiency and accuracy. The method introduces normalization techniques to eliminate the bias of minibatch estimation and sampling algorithms to reduce its variance. GraphSAINT demonstrates superior performance on five large datasets, achieving new state-of-the-art F1 scores on PPI (0.995) and Reddit (0.970). It also integrates well with other GCN architectures, such as JK-net and GAT, and can be extended to higher-order graph convolutional layers and more complex models.
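The core idea, building each minibatch from a sampled subgraph and then debiasing the loss, can be illustrated with a short sketch. The random-walk sampler, the empirically estimated node probabilities, and the 1/p_v loss reweighting below are simplified stand-ins for the paper's estimators, and all function names are hypothetical.

```python
import numpy as np

def sample_node_subgraph(adj, num_roots, walk_length, rng):
    """Toy random-walk sampler: pick root nodes uniformly, walk a fixed
    number of steps, and return the visited nodes as the subgraph's node set."""
    n = adj.shape[0]
    nodes = set()
    for v in rng.integers(0, n, size=num_roots):
        cur = int(v)
        nodes.add(cur)
        for _ in range(walk_length):
            neighbors = np.nonzero(adj[cur])[0]
            if neighbors.size == 0:
                break
            cur = int(rng.choice(neighbors))
            nodes.add(cur)
    return np.array(sorted(nodes))

def estimate_node_probs(adj, num_presamples, num_roots, walk_length, rng):
    """Estimate p_v (probability that node v appears in a sampled subgraph)
    by counting appearances over a batch of pre-sampled subgraphs."""
    counts = np.zeros(adj.shape[0])
    for _ in range(num_presamples):
        counts[sample_node_subgraph(adj, num_roots, walk_length, rng)] += 1.0
    return np.clip(counts / num_presamples, 1e-6, 1.0)

def normalized_minibatch_loss(per_node_loss, nodes, p_v):
    """Debiased minibatch loss: each sampled node's loss is reweighted by
    1 / p_v, so its expectation matches the mean full-graph loss."""
    return np.sum(per_node_loss[nodes] / p_v[nodes]) / len(p_v)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 50
    adj = (rng.random((n, n)) < 0.1).astype(int)
    adj = np.maximum(adj, adj.T)          # make the toy graph undirected
    np.fill_diagonal(adj, 0)

    p_v = estimate_node_probs(adj, num_presamples=200, num_roots=5,
                              walk_length=4, rng=rng)
    nodes = sample_node_subgraph(adj, num_roots=5, walk_length=4, rng=rng)
    fake_losses = rng.random(n)           # placeholder per-node losses
    print(normalized_minibatch_loss(fake_losses, nodes, p_v))
```

In this sketch a full GCN is run on each sampled subgraph only, which is why the number of nodes per layer stays fixed; the reweighting step is what keeps the sampled objective an unbiased estimate of the full-graph objective.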