GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training

August 23–27, 2020 | Jiezhong Qiu, Qibin Chen, Yuxiao Dong, Jing Zhang, Hongxia Yang, Ming Ding, Kuansan Wang, Jie Tang
Graph Contrastive Coding (GCC) is a self-supervised framework designed to pre-train graph neural networks (GNNs) on multiple graph datasets, aiming to capture universal and transferable network topological properties. Inspired by recent advances in pre-training in natural language processing and computer vision, GCC uses subgraph instance discrimination as its pre-training task, leveraging contrastive learning to enable GNNs to learn intrinsic and transferable structural representations. The pre-trained GCC model is then applied to downstream tasks on unseen graphs, demonstrating competitive or superior performance compared to task-specific models trained from scratch. Extensive experiments on three graph learning tasks and ten graph datasets validate the effectiveness and transferability of GCC, highlighting its potential for graph representation learning.
Key contributions include formalizing the GNN pre-training problem, designing the GCC framework, and demonstrating its superior performance in out-of-domain tasks.
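To make the pre-training task concrete: in subgraph instance discrimination, an encoded query subgraph should score high against its own augmented view (the positive key) and low against subgraphs sampled from other vertices (negative keys), which is typically optimized with an InfoNCE-style contrastive loss. The following is a minimal NumPy sketch of that scoring step, not GCC's actual implementation; the embedding dimensions, the temperature value, and the convention of placing the positive key at index 0 are illustrative assumptions.

```python
import numpy as np

def info_nce_loss(query, keys, temperature=0.07):
    """InfoNCE loss for one query subgraph embedding.

    query: (d,) embedding of the query subgraph.
    keys:  (k, d) embeddings; keys[0] is the positive (another view of the
           same subgraph), keys[1:] are negatives from other subgraphs.
    Returns the cross-entropy of classifying the positive among all keys.
    (Index-0-positive and the 0.07 temperature are assumptions here.)
    """
    # Cosine similarity: L2-normalize, then take dot products.
    query = query / np.linalg.norm(query)
    keys = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = keys @ query / temperature          # (k,) similarity scores

    # Numerically stable softmax cross-entropy with target class 0.
    logits = logits - logits.max()
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])
```

A query whose embedding matches its positive key yields a near-zero loss, while a query closer to a negative key is penalized heavily; minimizing this loss over many sampled subgraphs is what drives the encoder to learn transferable structural representations.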