21 Jun 2024 | Mucong Ding, Tahseen Rabbani, Bang An, Evan Z. Wang, Furong Huang
Sketch-GNN is a novel framework for training Graph Neural Networks (GNNs) with sublinear time and memory complexity with respect to the graph size. The approach leverages sketching techniques to approximate the nonlinear activations and graph convolution matrices in GNNs, using polynomial tensor sketch (PTS) theory. This method avoids the linear dependence on graph size, which is a significant challenge in GNN training, especially for large graphs. Additionally, Sketch-GNN introduces learnable locality-sensitive hashing (LSH) to improve the quality of sketches during training. Experiments on various benchmarks demonstrate the scalability and competitive performance of Sketch-GNN compared to full-size GNNs, making it a promising solution for large-scale graph learning tasks.
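The core idea of sketching can be illustrated with a basic count sketch, which compresses an n-row node feature matrix into c rows (c ≪ n) while approximately preserving inner products. This is a toy sketch for intuition only, not the paper's Sketch-GNN implementation; all dimensions and names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: n nodes, d-dim features, sketch dimension c << n.
n, d, c = 1000, 64, 100

X = rng.normal(size=(n, d))  # node feature matrix

# Count sketch: assign each node a random bucket h(i) in [c] and a random sign s(i).
h = rng.integers(0, c, size=n)
s = rng.choice([-1.0, 1.0], size=n)

# Sketch operator R (c x n); the sketched features are S = R @ X (c x d).
R = np.zeros((c, n))
R[h, np.arange(n)] = s
S = R @ X

# Inner products are preserved in expectation: E[(Rx)^T (Ry)] = x^T y,
# so S^T S approximates the d x d Gram matrix X^T X at c rows instead of n.
approx = S.T @ S
exact = X.T @ X
rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
```

Any downstream computation that only needs (approximate) Gram-matrix information can now run on the c-row sketch, which is where the sublinear dependence on graph size comes from.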