Self-Attention Graph Pooling


13 Jun 2019 | Junhyun Lee*, Inyeop Lee*, Jaewoo Kang
This paper proposes SAGPool, a self-attention-based graph pooling method for hierarchical pooling in graph neural networks (GNNs). SAGPool uses self-attention to decide which nodes to drop and which to retain, taking both node features and graph topology into account, and it achieves strong graph classification performance on benchmark datasets with a reasonable number of parameters.

The method is compared against existing pooling methods, including Set2Set, SortPool, DiffPool, and gPool. SAGPool outperforms these baselines on several datasets, most notably D&D and PROTEINS. The paper also evaluates variants of SAGPool that consider two-hop connections and that average attention scores from multiple GNNs.

SAGPool is efficient, with a storage complexity of O(|V| + |E|), and handles graphs of varying sizes and structures. The paper argues that these properties, together with the use of self-attention to capture graph topology, give SAGPool an advantage over prior pooling methods, and the results suggest it can improve performance on graph classification tasks.
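The core operation can be sketched as follows: a single GCN layer produces a scalar attention score per node, the top ⌈ratio·N⌉ nodes by score are kept, retained node features are gated by their scores, and the adjacency matrix is sliced to the surviving nodes. The sketch below is an illustrative numpy reconstruction under these assumptions, not the authors' implementation; the function name `sagpool` and the weight argument `w_att` are chosen here for clarity.

```python
import numpy as np

def sagpool(X, A, w_att, ratio=0.5):
    """Illustrative sketch of one SAGPool layer (assumed from the paper's
    description): a single-layer GCN computes per-node attention scores,
    and the top ceil(ratio * N) nodes are retained.

    X:     (N, F) node feature matrix
    A:     (N, N) adjacency matrix (no self-loops)
    w_att: (F, 1) attention weight vector of the scoring GCN
    """
    N = A.shape[0]
    # Standard GCN propagation with self-loops: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(N)
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    # Per-node attention scores, shape (N,)
    Z = np.tanh(A_norm @ X @ w_att).ravel()
    # Keep the top ceil(ratio * N) nodes by score
    k = int(np.ceil(ratio * N))
    idx = np.sort(np.argsort(Z)[-k:])
    # Gate retained features by their scores, then slice the graph;
    # storing only the reduced A and X keeps memory at O(|V| + |E|)
    X_out = X[idx] * Z[idx, None]
    A_out = A[np.ix_(idx, idx)]
    return X_out, A_out, idx
```

Because the scores depend on both X (features) and A (topology), the selection is graph-aware, unlike pooling that ranks nodes on features alone.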