The paper introduces FastGCN, an efficient method for training graph convolutional networks (GCNs) on large, dense graphs. GCNs, proposed by Kipf and Welling, are effective for semi-supervised learning, but they require test data to be available alongside the training data (a transductive setting) and incur high computational costs due to recursive neighborhood expansion across layers. FastGCN addresses these issues by interpreting graph convolutions as integral transforms of embedding functions under probability measures, which allows each layer to be approximated by a Monte Carlo estimate of the corresponding integral. This yields a batched training scheme that decouples training from test data, making the method inductive. Importance sampling is used to reduce the variance of the Monte Carlo estimates, further improving efficiency. Experimental results show that FastGCN trains significantly faster than the original GCN and GraphSAGE while maintaining comparable prediction accuracy. The benefit is most pronounced on large, dense graphs, where the authors report orders-of-magnitude improvements in training time.
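
To make the sampling idea concrete, the following is a minimal sketch (not the authors' implementation) of one FastGCN-style layer in NumPy. Each layer's aggregation A_hat @ H @ W is approximated by sampling a fixed number of nodes per layer, with importance weights proportional to the squared column norms of the normalized adjacency matrix, as described in the paper; the function names (fastgcn_layer, importance_distribution) and the dense-matrix setup are illustrative assumptions.

```python
import numpy as np

def importance_distribution(A_hat):
    """Sampling probabilities q(u) proportional to ||A_hat[:, u]||^2 (importance sampling)."""
    q = (A_hat ** 2).sum(axis=0)
    return q / q.sum()

def fastgcn_layer(H, W, A_hat, n_samples, rng):
    """
    Monte Carlo estimate of one graph-convolution layer:
        H_next ~= ReLU( A_hat[:, S] @ diag(1 / (n_samples * q[S])) @ H[S] @ W )
    where S are n_samples node indices drawn from q (with replacement).
    """
    q = importance_distribution(A_hat)
    S = rng.choice(A_hat.shape[1], size=n_samples, replace=True, p=q)
    scale = 1.0 / (n_samples * q[S])          # reweight so the estimator is unbiased
    Z = (A_hat[:, S] * scale) @ (H[S] @ W)    # aggregate only the sampled nodes
    return np.maximum(Z, 0.0)                 # ReLU

# Tiny usage example with random data (hypothetical sizes).
rng = np.random.default_rng(0)
n, d_in, d_out, t = 100, 16, 8, 20
A_hat = rng.random((n, n))
A_hat /= A_hat.sum(axis=1, keepdims=True)     # stand-in for the normalized adjacency matrix
H = rng.standard_normal((n, d_in))
W = rng.standard_normal((d_in, d_out))
H_next = fastgcn_layer(H, W, A_hat, n_samples=t, rng=rng)
print(H_next.shape)                           # (100, 8)
```

Because each layer touches only the sampled nodes rather than the full recursive neighborhood, the per-batch cost depends on the sample size t rather than on the neighborhood expansion, which is the source of the reported speedup.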