20 Feb 2019 | Rex Ying, Jiaxuan You, Christopher Morris, Xiang Ren, William L. Hamilton, Jure Leskovec
This paper introduces DIFFPOOL, a differentiable graph pooling module that enables the construction of deep, multi-layer graph neural networks (GNNs) by hierarchically pooling graph nodes. At each layer, DIFFPOOL learns a soft cluster assignment for the nodes, mapping them to a set of clusters that form the coarsened input to the next GNN layer. Because the assignment is differentiable, the whole hierarchy can be trained end to end, and the module can be combined with a variety of existing GNN architectures.

Concretely, the cluster assignment matrix is computed from the output of a GNN applied to the current graph and is then used to coarsen both the node features and the adjacency structure. Stacking several DIFFPOOL layers produces progressively coarser graphs, and the final coarsened representation is fed to a classifier for graph classification.

On graph classification benchmarks, combining existing GNN methods with DIFFPOOL yields an average accuracy improvement of 5–10%, setting a new state of the art on four out of five benchmark data sets. DIFFPOOL also learns interpretable hierarchical clusters that correspond to well-defined communities in the input graphs. Applying the module to other GNN architectures, such as STRUCTURE2VEC, shows that it is a general strategy for pooling over hierarchical structure rather than a fix for one particular model. Because each pooling step shrinks the graph, it also speeds up the graph convolution operations in subsequent layers. The paper concludes that DIFFPOOL is a promising approach for learning hierarchical graph representations and for achieving state-of-the-art results on graph classification tasks.
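To make the pooling mechanics concrete, below is a minimal PyTorch sketch of a single DIFFPOOL layer as described above: one GNN produces node embeddings, a second GNN produces a soft cluster-assignment matrix, and the assignment coarsens both the node features and the adjacency matrix. The `DenseGCNLayer` and `DiffPoolLayer` classes and all dimensions here are illustrative stand-ins under simplifying assumptions (dense adjacency, a single graph, no batching), not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F
from torch import nn


class DenseGCNLayer(nn.Module):
    """A simple dense graph convolution: H' = ReLU(A_norm @ H @ W).

    This is a stand-in for whatever GNN the DIFFPOOL layer wraps;
    the paper's experiments use GraphSAGE-style convolutions.
    """

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, adj, x):
        # Add self-loops and row-normalize the adjacency before mixing features.
        adj = adj + torch.eye(adj.size(-1), device=adj.device)
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        return F.relu(self.lin((adj / deg) @ x))


class DiffPoolLayer(nn.Module):
    """One DIFFPOOL layer: a GNN produces node embeddings Z, a second GNN
    produces a soft assignment S, and S coarsens features and adjacency."""

    def __init__(self, in_dim, hidden_dim, num_clusters):
        super().__init__()
        self.gnn_embed = DenseGCNLayer(in_dim, hidden_dim)   # node embeddings Z
        self.gnn_pool = DenseGCNLayer(in_dim, num_clusters)  # assignment logits

    def forward(self, adj, x):
        z = self.gnn_embed(adj, x)                        # (n, hidden_dim)
        s = torch.softmax(self.gnn_pool(adj, x), dim=-1)  # (n, k) soft assignment
        x_coarse = s.T @ z                                # (k, hidden_dim) cluster features
        adj_coarse = s.T @ adj @ s                        # (k, k) coarsened adjacency
        return adj_coarse, x_coarse, s


# Tiny usage example: a random 10-node graph pooled down to 3 clusters.
n, d = 10, 16
adj = (torch.rand(n, n) > 0.7).float()
adj = ((adj + adj.T) > 0).float()          # make the graph undirected
x = torch.randn(n, d)
adj_c, x_c, s = DiffPoolLayer(d, 32, 3)(adj, x)
print(adj_c.shape, x_c.shape)              # torch.Size([3, 3]) torch.Size([3, 16])
```

Stacking a few of these layers, each with fewer clusters than the last, yields the hierarchy described above. The full method in the paper also adds an auxiliary link-prediction loss and an entropy regularizer on the assignment matrix to stabilize training; both are omitted from this sketch.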