Representation Learning on Graphs with Jumping Knowledge Networks


Keyulu Xu, Chengtao Li, Yonglong Tian, Tomohiro Sonobe, Ken-ichi Kawarabayashi, Stefanie Jegelka | Proceedings of the 35th International Conference on Machine Learning (ICML), Stockholm, Sweden, PMLR 80, 2018
This paper introduces Jumping Knowledge Networks (JK-Nets) for representation learning on graphs. The key idea is to adaptively select a different neighborhood range for each node, yielding structure-aware representations. Traditional neighborhood-aggregation models such as Graph Convolutional Networks (GCN), GraphSAGE, and Graph Attention Networks (GAT) aggregate information over a fixed number of layers, and hence a fixed neighborhood radius, for every node, which may be far from optimal on graphs whose subgraph structures vary. JK-Nets overcome this limitation by letting each node selectively combine intermediate representations from all layers, effectively "jumping" to the most relevant neighborhood range.

To motivate this design, the paper analyzes the influence distribution of a node, which captures how much its learned representation is affected by each other node. This distribution is closely related to random-walk distributions on the graph, so the effective range of neighborhood aggregation depends on the graph's structure: in expander-like regions, random walks spread out quickly and influence distributions become broad after only a few steps, whereas in tree-like parts of the graph influence stays localized.
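The random-walk connection can be made concrete. Below is a minimal NumPy sketch, assuming GCN-style aggregation with self-loops: by the paper's Theorem 1, the influence distribution of a k-layer GCN on a node matches, in expectation, the distribution of a k-step random walk started at that node. The toy path graph is our own illustrative choice, not from the paper.

```python
import numpy as np

# Toy adjacency matrix: a 5-node path graph (an illustrative assumption;
# any undirected graph works here).
A = np.array([
    [0, 1, 0, 0, 0],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)

# Add self-loops, as in GCN-style aggregation, and row-normalize to obtain
# the transition matrix of a lazy random walk.
A_hat = A + np.eye(A.shape[0])
P = A_hat / A_hat.sum(axis=1, keepdims=True)

def influence_distribution(P, v, k):
    """k-step random walk distribution from node v: in expectation, the
    influence distribution of a k-layer GCN on v (paper's Theorem 1)."""
    dist = np.zeros(P.shape[0])
    dist[v] = 1.0
    for _ in range(k):
        dist = dist @ P
    return dist

# Influence spreads out as depth k grows; how fast depends on the graph.
for k in (1, 3, 6):
    print(k, influence_distribution(P, v=0, k=k).round(3))
```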
JK-Nets realize adaptive ranges with three layer-aggregation schemes: concatenation, element-wise max-pooling, and LSTM-attention. Each scheme combines a node's per-layer representations into its final representation, and max-pooling and LSTM-attention do so node-adaptively, letting the model select the most informative neighborhood range for each node.

Empirically, JK-Nets outperform existing models on social, bioinformatics, and citation networks, and consistently improve GCN, GraphSAGE, and GAT when used as the base architecture. The gains are largest on large, complex graphs with diverse subgraph structures. For example, on a citation network, JK-Nets adapt to each node's role: for nodes in hub-centric, expander-like regions they rely more heavily on the node's own features, while for nodes in tree-like regions they focus on the local neighborhood. Overall, the paper highlights the importance of adaptive neighborhood aggregation and shows that JK-Nets offer a flexible and effective way to learn structure-aware representations on graphs.
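The following PyTorch sketch illustrates the three aggregation schemes; the module and argument names are our own and this is an illustration of the idea, not the authors' code:

```python
import torch
import torch.nn as nn

class JumpingKnowledge(nn.Module):
    """Combines per-layer node representations into a final representation
    (a minimal sketch of the paper's three layer-aggregation schemes)."""

    def __init__(self, mode="max", hidden_dim=None):
        super().__init__()
        self.mode = mode
        if mode == "lstm":
            # Bi-directional LSTM over the sequence of layer representations,
            # followed by a learned attention score per layer.
            self.lstm = nn.LSTM(hidden_dim, hidden_dim,
                                batch_first=True, bidirectional=True)
            self.att = nn.Linear(2 * hidden_dim, 1)

    def forward(self, xs):
        # xs: list of [num_nodes, hidden_dim] tensors, one per GNN layer.
        if self.mode == "cat":
            # Concatenation: a layer-aware combination shared by all nodes.
            return torch.cat(xs, dim=-1)
        if self.mode == "max":
            # Element-wise max-pooling: picks the dominant layer per feature.
            return torch.stack(xs, dim=0).max(dim=0).values
        # LSTM-attention: score each layer per node, then take a
        # softmax-weighted sum of the layer representations.
        h = torch.stack(xs, dim=1)                 # [num_nodes, L, hidden]
        out, _ = self.lstm(h)                      # [num_nodes, L, 2*hidden]
        alpha = torch.softmax(self.att(out).squeeze(-1), dim=-1)
        return (h * alpha.unsqueeze(-1)).sum(dim=1)

# Usage on random features: 4 layers of 16-dim representations, 10 nodes.
xs = [torch.randn(10, 16) for _ in range(4)]
print(JumpingKnowledge("max")(xs).shape)                  # torch.Size([10, 16])
print(JumpingKnowledge("cat")(xs).shape)                  # torch.Size([10, 64])
print(JumpingKnowledge("lstm", hidden_dim=16)(xs).shape)  # torch.Size([10, 16])
```

Concatenation learns one layer weighting shared across all nodes, while max-pooling and LSTM-attention choose ranges per node, which is what enables the adaptive, structure-aware behavior described above.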