2019 | Christopher Morris, Martin Ritzert, Matthias Fey, William L. Hamilton, Jan Eric Lenssen, Gaurav Rattan, Martin Grohe
This paper investigates the theoretical relationship between graph neural networks (GNNs) and the 1-dimensional Weisfeiler-Leman (1-WL) graph-isomorphism heuristic. The authors show that GNNs are exactly as powerful as 1-WL in distinguishing non-isomorphic (sub-)graphs. Consequently, both methods share the same limitations, such as failing to distinguish certain graphs with different structural properties. To address these limitations, the authors propose $k$-dimensional GNNs ($k$-GNNs), which extend GNNs to capture higher-order graph structure at multiple scales. Such higher-order structure is crucial for characterizing social networks and molecular graphs. The experimental results confirm the theoretical findings and demonstrate that higher-order information is beneficial for graph classification and regression tasks. A hierarchical variant of $k$-GNNs, which combines representations learned at different granularities, consistently outperforms traditional GNNs on various benchmarks.
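Since the summary hinges on GNN expressiveness matching the 1-WL heuristic, a minimal sketch of 1-WL colour refinement may help illustrate what that test does. This is an illustrative toy (the function name, adjacency-dict encoding, and degree initialization are my assumptions, not the paper's implementation):

```python
from collections import Counter

def wl_colors(adj, rounds):
    """1-WL colour refinement: repeatedly re-colour each node by hashing
    its own colour together with the multiset of its neighbours' colours.
    adj: dict mapping node -> list of neighbours.
    Returns the final colour histogram; two graphs with different
    histograms are provably non-isomorphic (the converse fails)."""
    colors = {v: len(adj[v]) for v in adj}  # initialize with node degrees
    for _ in range(rounds):
        colors = {
            v: hash((colors[v], tuple(sorted(colors[u] for u in adj[v]))))
            for v in adj
        }
    return Counter(colors.values())

# A classic failure case mentioned implicitly by the paper's motivation:
# a 6-cycle and two disjoint triangles are both 2-regular, so 1-WL
# (and hence a standard GNN) cannot tell them apart.
c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}
print(wl_colors(c6, 6) == wl_colors(two_triangles, 6))  # True
```

Distinguishing such regular graphs requires looking at node *subsets* rather than single nodes, which is exactly the gap the paper's $k$-GNNs target.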