AUGUST 2019 | Zonghan Wu, Shirui Pan, Member, IEEE, Fengwen Chen, Guodong Long, Chengqi Zhang, Senior Member, IEEE, Philip S. Yu, Fellow, IEEE
This paper provides a comprehensive survey of graph neural networks (GNNs) in data mining and machine learning. It introduces a new taxonomy that categorizes GNNs into four main categories: recurrent graph neural networks (RecGNNs), convolutional graph neural networks (ConvGNNs), graph autoencoders (GAEs), and spatial-temporal graph neural networks (STGNNs). The survey covers the background of GNNs, including their history and key concepts, and provides detailed descriptions of representative models within each category. It also discusses the applications of GNNs across various domains and summarizes resources such as open-source codes, benchmark datasets, and model evaluations. Finally, the paper suggests potential future research directions in the field, focusing on model depth, scalability trade-offs, heterogeneity, and dynamicity.