Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification

13 Jun 2024 | Yuankai Luo, Lei Shi, Xiao-Ming Wu
This paper reevaluates the performance of classic Graph Neural Networks (GNNs) against Graph Transformers (GTs) on node classification tasks. The authors conduct a thorough empirical analysis using three classic GNN models—GCN, GAT, and GraphSAGE—and compare them with state-of-the-art GTs on 18 diverse datasets, including homophilous, heterophilous, and large-scale graphs. Their findings suggest that the previously reported superiority of GTs may have been overstated due to suboptimal hyperparameter configurations for the GNN baselines. With modest hyperparameter tuning, the classic GNN models achieve state-of-the-art performance, matching or even surpassing GTs on 17 of the 18 datasets.
The study also includes detailed ablation studies to investigate the impact of various GNN configurations, such as normalization, dropout, residual connections, network depth, and jumping knowledge mode. The results highlight the importance of these hyperparameters and provide insights into how they influence node classification performance. The authors aim to promote more rigorous empirical evaluations in graph machine learning research.
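The configurations ablated in the paper can be illustrated with a minimal forward-pass sketch. The code below is not the authors' implementation; it is a NumPy-only illustration (no autograd or training loop) of how symmetric adjacency normalization, dropout, residual connections, depth, and jumping-knowledge aggregation fit together in a GCN-style model. The function name `gcn_forward` and all shapes are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_forward(A, X, weights, p_drop=0.5, training=True, jk="cat"):
    """Illustrative GCN forward pass combining the hyperparameters ablated
    in the paper: normalization, dropout, residuals, depth, and jumping
    knowledge. NumPy sketch only; not the authors' code."""
    # Symmetric normalization: A_hat = D^{-1/2} (A + I) D^{-1/2}
    A = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    A_hat = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

    H = X
    layer_outputs = []
    for W in weights:                # network depth = len(weights)
        H_in = H
        H = A_hat @ H @ W            # graph convolution
        H = np.maximum(H, 0.0)       # ReLU
        # LayerNorm-style normalization over the feature dimension
        H = (H - H.mean(axis=1, keepdims=True)) / (H.std(axis=1, keepdims=True) + 1e-5)
        if training and p_drop > 0:  # inverted dropout
            mask = rng.random(H.shape) >= p_drop
            H = H * mask / (1.0 - p_drop)
        if H_in.shape == H.shape:    # residual connection when dims match
            H = H + H_in
        layer_outputs.append(H)
    # Jumping knowledge: "cat" mode concatenates all intermediate layers
    if jk == "cat":
        return np.concatenate(layer_outputs, axis=1)
    return layer_outputs[-1]

# Tiny example: 4 nodes, 8 input features, 3 layers of width 16
A = (rng.random((4, 4)) > 0.5).astype(float)
A = np.maximum(A, A.T)               # symmetrize the adjacency matrix
X = rng.standard_normal((4, 8))
weights = [rng.standard_normal((8, 16))] + [rng.standard_normal((16, 16)) for _ in range(2)]
out = gcn_forward(A, X, weights)
print(out.shape)                     # (4, 48): 3 layers x 16 features each
```

The sketch makes the paper's point concrete: each of these components is a tunable knob, and the reported GNN-versus-GT gap can hinge on how carefully such knobs are set.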