An End-to-End Deep Learning Architecture for Graph Classification


2018 | Muhan Zhang, Zhicheng Cui, Marion Neumann, Yixin Chen
This paper proposes DGCNN, a novel end-to-end deep learning architecture for graph classification. The main challenges in graph classification are extracting useful features from graph structures and reading graphs in a meaningful, consistent order. To address these challenges, the authors design a localized graph convolution model and a novel SortPooling layer. The SortPooling layer sorts graph vertices into a consistent order, enabling traditional neural networks to process graph data directly, so DGCNN supports end-to-end gradient-based training on original graphs without any preprocessing.

The architecture consists of three stages. First, graph convolution layers extract multi-scale local substructure features and define a consistent vertex ordering. Second, the SortPooling layer sorts the vertex features under that ordering, rather than summing them, and unifies input sizes across graphs; this preserves information about the global graph topology. Third, traditional convolutional and dense layers read the sorted graph representations and make predictions.

The paper also discusses the connection between the proposed graph convolution and two popular graph kernels, the Weisfeiler-Lehman subtree kernel and the propagation kernel. Experiments on benchmark datasets show that DGCNN achieves highly competitive performance with state-of-the-art graph kernels and other graph neural network methods.
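To make the two key operations concrete, here is a minimal NumPy sketch of a DGCNN-style graph convolution (propagating vertex features over the adjacency structure) and of SortPooling (sorting vertices by a feature channel, then truncating or zero-padding to a fixed size k). This is an illustrative simplification under stated assumptions, not the authors' implementation; the function names, the use of the last channel as the sort key, and the tanh nonlinearity follow common descriptions of DGCNN but are assumptions here.

```python
import numpy as np

def graph_conv(A, X, W):
    """One graph convolution step: Z = tanh(D^-1 (A + I) X W).

    A: (n, n) adjacency matrix, X: (n, c) vertex features,
    W: (c, c') trainable weights. Adding I propagates each
    vertex's own features; D^-1 row-normalizes the aggregation.
    """
    A_tilde = A + np.eye(A.shape[0])
    D_inv = np.diag(1.0 / A_tilde.sum(axis=1))
    return np.tanh(D_inv @ A_tilde @ X @ W)

def sort_pooling(Z, k):
    """Sort vertex rows of Z by the last feature channel (descending),
    then truncate to k rows or zero-pad so every graph yields a
    fixed-size (k, c) tensor for the downstream 1-D conv layers."""
    order = np.argsort(-Z[:, -1], kind="stable")
    Z_sorted = Z[order]
    n, c = Z_sorted.shape
    if n >= k:
        return Z_sorted[:k]
    return np.vstack([Z_sorted, np.zeros((k - n, c))])
```

Because the sort key is computed from the graph itself, isomorphic graphs yield the same sorted representation, which is what lets fixed-size traditional layers consume arbitrary graphs.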
The paper concludes that DGCNN achieves better performance than existing methods on many benchmark datasets and provides a unified way to integrate preprocessing into a neural network structure.