Graph Contrastive Learning with Augmentations

3 Apr 2021 | Yuning You*, Tianlong Chen*, Yongduo Sui, Ting Chen, Zhangyang Wang, Yang Shen
The paper introduces Graph Contrastive Learning (GraphCL), a framework for pre-training graph neural networks (GNNs) on graph-structured data. Unlike traditional GNNs, which are typically trained end-to-end under supervision, GraphCL uses self-supervised learning to improve the generalizability, transferability, and robustness of GNN representations. The framework incorporates four types of graph augmentation, each encoding a different prior: node dropping, edge perturbation, attribute masking, and subgraph sampling. These augmentations are designed to capture different aspects of graph data, such as structural and contextual information.

The authors systematically evaluate combinations of these augmentations across multiple datasets and settings, including semi-supervised learning, unsupervised representation learning, transfer learning, and robustness to adversarial attacks. The results show that GraphCL achieves state-of-the-art performance without extensive hyper-parameter tuning or sophisticated GNN architectures, and that it improves the robustness of GNNs against common adversarial attacks.

The paper also provides a detailed analysis of the role of data augmentations in GraphCL, highlighting both their importance and the benefit of composing augmentations of different types. It further examines how augmentation strength and pattern affect performance, finding that harder, more diverse augmentations tend to yield better results.
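To make the mechanics concrete, here is a minimal sketch of two of the augmentations (node dropping and attribute masking) together with a simplified contrastive objective. It assumes a PyTorch-style graph representation where `x` is an N×D node-feature matrix and `edge_index` is a 2×E edge list; the function names (`drop_nodes`, `mask_attributes`, `nt_xent`) and the one-directional NT-Xent variant are illustrative choices for this sketch, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def drop_nodes(x, edge_index, drop_ratio=0.2):
    """Node dropping: discard a random fraction of nodes and their
    incident edges. Prior: missing part of the vertices should not
    change the graph's semantics."""
    num_nodes = x.size(0)
    keep_mask = torch.rand(num_nodes) >= drop_ratio
    keep_idx = keep_mask.nonzero(as_tuple=True)[0]
    # Remap surviving node indices to a compact 0..K-1 range.
    remap = torch.full((num_nodes,), -1, dtype=torch.long)
    remap[keep_idx] = torch.arange(keep_idx.numel())
    src, dst = edge_index
    edge_keep = keep_mask[src] & keep_mask[dst]
    return x[keep_idx], remap[edge_index[:, edge_keep]]

def mask_attributes(x, mask_ratio=0.2):
    """Attribute masking: zero out the features of random nodes.
    Prior: missing attributes can be recovered from context."""
    x = x.clone()
    x[torch.rand(x.size(0)) < mask_ratio] = 0.0
    return x

def nt_xent(z1, z2, temperature=0.5):
    """Simplified NT-Xent loss between graph-level embeddings of two
    augmented views of the same batch. Positives sit on the diagonal
    of the pairwise similarity matrix."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature   # cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)
```

In the full pipeline, two independently augmented views of each graph are encoded by a shared GNN followed by a projection head, and the loss above pulls the two views of the same graph together while pushing apart views of different graphs in the batch.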