May 13–17, 2024, Singapore | Qitian Wu, Fan Nie, Chenxiao Yang, Tianyi Bao, Junchi Yan
The paper "Graph Out-of-Distribution Generalization via Causal Intervention" addresses the challenge of out-of-distribution (OOD) generalization in graph neural networks (GNNs), which often fail to perform well when encountering distribution shifts. The authors adopt a bottom-up data-generative perspective and identify that the root cause of GNNs' failure lies in the latent confounding bias from the environment, which mis guides the model to leverage environment-sensitive correlations between ego-graph features and target node labels. To counter this, they propose a novel approach called Causal Intervention for Network Data (CAInet), which introduces a new learning objective derived from causal inference. This objective coordinates an environment estimator and a mixture-of-expert GNN predictor to counteract the confounding bias and facilitate the learning of generalizable predictive relations. Extensive experiments on various graph datasets demonstrate that CAInet significantly improves generalization performance, achieving up to 27.4% accuracy improvement over state-of-the-art methods in graph OOD generalization benchmarks. The proposed method does not require prior knowledge of environment labels and effectively handles different types of distribution shifts.The paper "Graph Out-of-Distribution Generalization via Causal Intervention" addresses the challenge of out-of-distribution (OOD) generalization in graph neural networks (GNNs), which often fail to perform well when encountering distribution shifts. The authors adopt a bottom-up data-generative perspective and identify that the root cause of GNNs' failure lies in the latent confounding bias from the environment, which mis guides the model to leverage environment-sensitive correlations between ego-graph features and target node labels. To counter this, they propose a novel approach called Causal Intervention for Network Data (CAInet), which introduces a new learning objective derived from causal inference. This objective coordinates an environment estimator and a mixture-of-expert GNN predictor to counteract the confounding bias and facilitate the learning of generalizable predictive relations. Extensive experiments on various graph datasets demonstrate that CAInet significantly improves generalization performance, achieving up to 27.4% accuracy improvement over state-of-the-art methods in graph OOD generalization benchmarks. The proposed method does not require prior knowledge of environment labels and effectively handles different types of distribution shifts.