A Sparsity Principle for Partially Observable Causal Representation Learning


2024 | Danru Xu, Dingling Yao, Sébastien Lachapelle, Perouz Taslakian, Julius von Kügelgen, Francesco Locatello, Sara Magliacane
This paper addresses the challenge of causal representation learning in a partially observable setting, where not all causal variables are captured in the observations. Unlike previous work that assumes every observation captures all causal variables, this paper considers unpaired observations from a dataset with instance-dependent partial observability patterns. The main contributions are two identifiability results: one for linear mixing functions without parametric assumptions on the causal model, and another for piecewise linear mixing functions with Gaussian latent causal variables. Based on these results, the authors propose two methods that enforce sparsity in the inferred representation to estimate the underlying causal variables. Experiments on simulated datasets and established benchmarks demonstrate the effectiveness of the proposed methods in recovering the ground-truth latents. The paper also discusses related work and highlights limitations, such as the need for additional assumptions to extend the results to nonlinear mixing functions and the empirical difficulty of satisfying Gaussianity constraints.
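To give a concrete sense of what "enforcing sparsity in the inferred representation" can look like in practice, below is a minimal, hypothetical sketch rather than the paper's actual estimator: a linear autoencoder trained with a reconstruction loss plus an L1 penalty on the latents, on synthetic data where each observation is generated from an instance-dependent subset of latents. All variable names, dimensions, and hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch: sparsity-regularized representation learning under
# instance-dependent partial observability. Illustrates the general idea only;
# it is NOT the authors' method or their identifiability-backed objective.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data: latents z, instance-dependent masks (partial observability),
# and a ground-truth linear mixing into observations x.
n, d_z, d_x = 2000, 5, 10
z = torch.randn(n, d_z)
mask = (torch.rand(n, d_z) < 0.6).float()   # each sample "uses" a subset of latents
A = torch.randn(d_z, d_x)                   # ground-truth linear mixing matrix
x = (z * mask) @ A

encoder = nn.Linear(d_x, d_z)
decoder = nn.Linear(d_z, d_x)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-2)

lam = 0.05  # sparsity strength (illustrative value)
for step in range(2000):
    z_hat = encoder(x)
    x_hat = decoder(z_hat)
    recon = ((x_hat - x) ** 2).mean()       # reconstruction term
    sparsity = z_hat.abs().mean()           # L1 penalty encouraging sparse latents
    loss = recon + lam * sparsity
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"reconstruction: {recon.item():.4f}, mean |z_hat|: {sparsity.item():.4f}")
```

The intuition this toy setup is meant to convey: when each observation depends on only a subset of the latent causal variables, penalizing the magnitude of the inferred representation (subject to good reconstruction) favors encoders that set the unused latent dimensions to zero, which is the kind of sparsity constraint the paper's identifiability results build on.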