Graph Contrastive Invariant Learning from the Causal Perspective


7 Mar 2024 | Yanhu Mo, Xiao Wang, Shaohua Fan, Chuan Shi
This paper proposes GCIL, a novel graph contrastive learning method grounded in causality. The authors analyze graph contrastive learning (GCL) through a structural causal model (SCM) and find that traditional GCL may fail to learn invariant representations because non-causal information in the graph leaks into the learned features. To address this, they introduce a spectral graph augmentation that simulates interventions on non-causal factors and design two training objectives: invariance and independence.

The invariance objective encourages the encoder to capture the invariant information carried by the causal variables, while the independence objective reduces the influence of confounders on those variables. Experiments on node classification across four datasets show that GCIL outperforms existing methods, achieving state-of-the-art performance on Cora, Citeseer, and Pubmed and strong results on Wiki-CS and Flickr. By capturing and exploiting orthogonal information in the representation, the method learns more meaningful features. Ablation studies and a hyper-parameter sensitivity analysis further demonstrate the effectiveness of the two proposed objectives.
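The two objectives described above can be sketched in code. This is a minimal illustration under assumed formulations — cosine agreement between two augmented views for invariance, and decorrelation of representation dimensions for independence — not the paper's exact losses; the function names and the toy data below are hypothetical.

```python
import numpy as np

def invariance_loss(z1, z2):
    # Encourage the representations of two augmented views of the same
    # nodes to agree (one common way to instantiate an invariance
    # objective; the paper's exact formulation may differ).
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    return float(np.mean(1.0 - np.sum(z1 * z2, axis=1)))

def independence_loss(z):
    # Penalize off-diagonal entries of the feature correlation matrix,
    # pushing representation dimensions toward pairwise decorrelation
    # (a stand-in for reducing confounder influence).
    z = z - z.mean(axis=0, keepdims=True)
    cov = (z.T @ z) / (len(z) - 1)
    d = np.sqrt(np.diag(cov))
    corr = cov / np.outer(d, d)
    off_diag = corr - np.diag(np.diag(corr))
    return float(np.sum(off_diag ** 2))

# Toy usage: two "views" of 64 node embeddings in 8 dimensions.
rng = np.random.default_rng(0)
z1 = rng.normal(size=(64, 8))
z2 = z1 + 0.01 * rng.normal(size=(64, 8))  # nearly identical views

total = invariance_loss(z1, z2) + independence_loss(z1)
print(total)
```

In a training loop, a weighted sum of the two terms would be minimized jointly, so the encoder keeps the information shared across interventions (views) while spreading it over decorrelated dimensions.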