Graph Contrastive Learning with Augmentations

3 Apr 2021 | Yuning You*, Tianlong Chen*, Yongduo Sui, Ting Chen, Zhangyang Wang, Yang Shen
This paper proposes GraphCL, a graph contrastive learning framework for learning unsupervised representations of graph data. To address the heterogeneity of graph data, the authors design four types of graph augmentations (node dropping, edge perturbation, attribute masking, and subgraph sampling), each imposing a certain prior over the data and parameterized by its extent and pattern. Applying two augmentations to the same graph produces a pair of correlated views, and a GNN is pre-trained to output representations that agree across the two views, i.e., representations invariant to the corresponding perturbations.
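To make the four augmentation families concrete, below is a minimal Python sketch, assuming a graph is stored as a 2×E edge list `edge_index` plus an N×F node-feature matrix `x`. The function names and `ratio` knobs are illustrative stand-ins for the paper's extent parameters, not the authors' code; for simplicity the node set is kept fixed and removed nodes have their features zeroed rather than being reindexed.

```python
import numpy as np

def node_dropping(edge_index, x, ratio=0.2, rng=np.random):
    """Drop a random fraction of nodes and all edges incident to them."""
    n = x.shape[0]
    keep = rng.rand(n) >= ratio                       # mask of surviving nodes
    mask = keep[edge_index[0]] & keep[edge_index[1]]  # edges with both ends kept
    return edge_index[:, mask], x * keep[:, None]     # zero out dropped features

def edge_perturbation(edge_index, x, ratio=0.2, rng=np.random):
    """Randomly remove a fraction of edges and add the same number back."""
    n, e = x.shape[0], edge_index.shape[1]
    k = int(ratio * e)
    kept = edge_index[:, rng.choice(e, e - k, replace=False)]
    added = rng.randint(0, n, size=(2, k))            # uniformly random new edges
    return np.concatenate([kept, added], axis=1), x

def attribute_masking(edge_index, x, ratio=0.2, rng=np.random):
    """Mask (zero) the feature vectors of a random fraction of nodes."""
    mask = rng.rand(x.shape[0]) >= ratio
    return edge_index, x * mask[:, None]

def subgraph(edge_index, x, ratio=0.8, rng=np.random):
    """Keep a subgraph grown by a random walk from a random seed node."""
    n = x.shape[0]
    target = max(1, int(ratio * n))
    nodes = {int(rng.randint(n))}
    for _ in range(10 * n):                           # guard against small components
        if len(nodes) >= target:
            break
        cand = edge_index[1][np.isin(edge_index[0], list(nodes))]
        if cand.size == 0:
            break
        nodes.add(int(rng.choice(cand)))
    keep = np.zeros(n, dtype=bool)
    keep[list(nodes)] = True
    mask = keep[edge_index[0]] & keep[edge_index[1]]
    return edge_index[:, mask], x * keep[:, None]
```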
The authors systematically assess how different augmentation combinations affect performance across datasets, explaining where the gains come from and offering guidance on choosing augmentations for a specific dataset: the type, extent, and pattern of augmentation all matter, and combining augmentations yields especially large improvements on dense graphs. Even without tuning augmentation extents or using sophisticated GNN architectures, GraphCL produces graph representations with similar or better generalizability, transferability, and robustness than state-of-the-art methods. The framework is shown to perform mutual information maximization between the two views, connecting it to recently proposed contrastive methods; indeed, GraphCL can be rewritten as a general framework unifying a broad family of contrastive learning methods on graph-structured data. Evaluated against state-of-the-art baselines in semi-supervised, unsupervised representation learning, and transfer learning settings, GraphCL achieves state-of-the-art graph classification performance and improved robustness against common adversarial attacks. The paper concludes that GraphCL is a promising and general approach for pre-training GNNs, benefiting both their effectiveness and efficiency.
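As a concrete illustration of the contrastive objective behind the mutual-information view, here is a minimal PyTorch sketch of a normalized temperature-scaled cross-entropy (NT-Xent) loss of the kind GraphCL optimizes: the two augmented views of each graph in a minibatch form the positive pair, and the other graphs in the batch act as negatives. This is a common symmetric formulation, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5):
    """z1, z2: (B, d) projected embeddings of two views of the same B graphs."""
    z1 = F.normalize(z1, dim=1)                       # cosine similarity via
    z2 = F.normalize(z2, dim=1)                       # normalized dot products
    sim = z1 @ z2.t() / temperature                   # (B, B) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)
    # Diagonal entries are positive pairs; cross-entropy pushes each view's
    # similarity to its own counterpart above all other graphs in the batch.
    return 0.5 * (F.cross_entropy(sim, labels) + F.cross_entropy(sim.t(), labels))
```

In a pre-training loop, `z1` and `z2` would come from passing two independently augmented copies of the same minibatch through a GNN encoder and projection head (both assumed here), then minimizing `nt_xent_loss(z1, z2)`.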