Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations

18 Jun 2019 | Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem
This paper challenges common assumptions in the unsupervised learning of disentangled representations. It proves theoretically that such learning is fundamentally impossible without inductive biases on both the models and the data. A large-scale experimental study, training more than 12,000 models across seven datasets, shows that while the considered methods effectively enforce that the dimensions of the aggregated posterior (which is sampled) are uncorrelated, the dimensions of the representation (which is taken to be the posterior mean) are often correlated. The study further finds that disentangled representations cannot be reliably learned without supervision, and that increased disentanglement does not necessarily reduce the sample complexity of downstream learning tasks.

The paper suggests that future research should make the role of inductive biases and supervision explicit, investigate concrete practical benefits of disentanglement, and adopt reproducible experimental setups covering diverse datasets. To facilitate such research, the authors release a new library, disentanglement_lib. Overall, the results indicate that current methods are not consistently effective at learning disentangled representations, that hyperparameters and random seeds have a significant impact on performance, and that more robust and diverse experimental evaluations are needed to understand the utility of disentangled representations in real-world tasks.
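The correlation finding can be probed directly on a trained model. Below is a minimal sketch, assuming a VAE-style encoder function `encode` that maps one input to the mean of the Gaussian posterior q(z|x); `encode` is a hypothetical helper used for illustration and is not part of the disentanglement_lib API.

```python
import numpy as np

def mean_representation_correlation(encode, data):
    """Measure correlations between dimensions of the mean representation.

    Sketch only: `encode` is assumed to return the posterior mean for one
    input, as in a standard VAE encoder. Not the disentanglement_lib API.
    """
    # The representation r(x) is taken to be the posterior mean, matching
    # the convention examined in the paper's correlation analysis.
    means = np.stack([encode(x) for x in data])  # (n_samples, latent_dim)

    # Pairwise Pearson correlations between latent dimensions. Near-zero
    # off-diagonal entries would indicate uncorrelated representation
    # dimensions; the paper reports they are often far from zero.
    corr = np.corrcoef(means, rowvar=False)
    off_diagonal = corr[~np.eye(corr.shape[0], dtype=bool)]
    return corr, float(np.abs(off_diagonal).max())
```

A large maximal off-diagonal correlation on held-out data would reproduce the paper's observation that regularizing the sampled aggregated posterior does not guarantee an uncorrelated mean representation.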