Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification

13 Jun 2024 | Yuankai Luo, Lei Shi, Xiao-Ming Wu
This paper reevaluates the performance of three classic GNN models (GCN, GAT, and GraphSAGE) against Graph Transformers (GTs) on node classification tasks. The study shows that with proper hyperparameter tuning, these classic GNNs achieve state-of-the-art performance, matching or exceeding recent GTs on 17 of 18 datasets.

The research highlights the impact of hyperparameters such as normalization, dropout, residual connections, network depth, and the jumping knowledge mode on GNN performance. The findings suggest that the previously reported superiority of GTs over GNNs may have been overstated because earlier evaluations used suboptimal GNN hyperparameter configurations.

The study also demonstrates that, with appropriate configurations, classic GNNs perform well on both homophilous and heterophilous graphs. The results indicate that message passing remains effective for learning node representations on large-scale graphs, and that current GTs have not yet resolved classic GNN issues such as over-smoothing and limited long-range dependencies. The authors encourage more accurate comparisons and evaluations of model capabilities in graph machine learning.
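As a concrete illustration of the configuration choices the paper emphasizes, below is a minimal sketch in PyTorch Geometric of a GCN that combines normalization, dropout, residual connections, deeper stacking, and a jumping-knowledge readout. The class name, layer count, and default values are hypothetical choices for illustration, not the authors' released code.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, JumpingKnowledge


class TunedGCN(torch.nn.Module):
    """Illustrative GCN with the ingredients the paper highlights:
    normalization, dropout, residual connections, depth, and
    jumping knowledge. Not the authors' released implementation."""

    def __init__(self, in_dim, hidden_dim, out_dim, num_layers=4, dropout=0.5):
        super().__init__()
        self.convs = torch.nn.ModuleList()
        self.norms = torch.nn.ModuleList()
        dims = [in_dim] + [hidden_dim] * num_layers
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            self.convs.append(GCNConv(d_in, d_out))
            self.norms.append(torch.nn.BatchNorm1d(d_out))
        self.dropout = dropout
        # 'cat' concatenates every layer's output, letting the classifier
        # draw on both shallow and deep representations.
        self.jk = JumpingKnowledge(mode='cat')
        self.lin = torch.nn.Linear(num_layers * hidden_dim, out_dim)

    def forward(self, x, edge_index):
        xs = []
        for conv, norm in zip(self.convs, self.norms):
            h = F.relu(norm(conv(x, edge_index)))
            h = F.dropout(h, p=self.dropout, training=self.training)
            # Residual connection; skipped on the first layer,
            # where input and hidden dimensions differ.
            x = h + x if h.shape == x.shape else h
            xs.append(x)
        return self.lin(self.jk(xs))
```

A model would be instantiated as, e.g., TunedGCN(dataset.num_features, 256, dataset.num_classes) and trained with a standard cross-entropy objective; the 'cat' jumping-knowledge mode is one of several options (alongside 'max' and 'lstm') that the paper treats as a tunable hyperparameter.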