FASTGCN: FAST LEARNING WITH GRAPH CONVOLUTIONAL NETWORKS VIA IMPORTANCE SAMPLING

30 Jan 2018 | Jie Chen*, Tengfei Ma*, Cao Xiao
FastGCN is a graph convolutional network (GCN) model that improves training efficiency and generalization through importance sampling. Unlike the original GCN, whose transductive training requires both the training and test vertices to be present in the graph, FastGCN interprets graph convolutions as integral transforms of embedding functions under probability measures over vertices. This interpretation admits Monte Carlo estimation of the integrals, enabling batched, inductive training that does not depend on the test data and avoids the large neighborhood expansions that make standard GCN training expensive on dense graphs.

The key innovation is to sample a fixed number of vertices per layer, rather than recursively expanding neighborhoods, which reduces computational overhead and improves scalability. FastGCN further applies importance sampling to reduce the variance of the resulting gradient estimates, yielding faster training with comparable prediction accuracy. The method is particularly effective for large, dense graphs and applies to tasks such as node classification and link prediction.

Experiments on multiple benchmark datasets show that FastGCN trains up to an order of magnitude faster than GCN and GraphSAGE while maintaining similar accuracy, demonstrating its effectiveness in real-world applications.
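To make the sampling scheme concrete, here is a minimal NumPy sketch of one layer-wise sampled convolution. It approximates the full layer Â H W by drawing t vertices from an importance distribution q(u) proportional to the squared norm of column u of the normalized adjacency matrix Â, in the spirit of the paper's variance-reduced estimator. The function names, toy sizes, and ReLU activation are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def importance_distribution(A_hat):
    # q(u) proportional to ||A_hat[:, u]||^2: the variance-reducing proposal.
    col_sq_norms = np.linalg.norm(A_hat, axis=0) ** 2
    return col_sq_norms / col_sq_norms.sum()

def sampled_conv_layer(A_hat, H, W, q, num_samples, rng):
    """Monte Carlo estimate of relu(A_hat @ H @ W) using `num_samples`
    vertices drawn with replacement from the importance distribution q."""
    n = A_hat.shape[1]
    idx = rng.choice(n, size=num_samples, replace=True, p=q)
    # Importance weighting: each sampled column u is rescaled by 1 / (t * q(u)),
    # so the sum over samples is an unbiased estimate of A_hat @ H.
    scale = 1.0 / (num_samples * q[idx])    # shape (t,)
    est = (A_hat[:, idx] * scale) @ H[idx]  # approx. A_hat @ H, shape (n, d_in)
    return np.maximum(est @ W, 0.0)         # ReLU activation

# Toy usage on a random row-normalized adjacency (hypothetical sizes).
rng = np.random.default_rng(0)
n, d_in, d_out, t = 100, 16, 8, 20
A_hat = rng.random((n, n))
A_hat /= A_hat.sum(axis=1, keepdims=True)
H = rng.standard_normal((n, d_in))
W = rng.standard_normal((d_in, d_out))
q = importance_distribution(A_hat)
H_next = sampled_conv_layer(A_hat, H, W, q, t, rng)
print(H_next.shape)  # (100, 8)
```

Because the sample size t is fixed per layer, the cost of a forward pass no longer grows with neighborhood size, which is what makes the approach scale to large, dense graphs.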