7 Nov 2016 | Yuri Burda, Roger Grosse & Ruslan Salakhutdinov
The paper introduces the Importance Weighted Autoencoder (IWAE), a generative model that shares the architecture of the Variational Autoencoder (VAE) but is trained on a tighter log-likelihood lower bound derived from importance weighting. The standard VAE objective implicitly imposes strong assumptions on the posterior distribution, such as approximate factorization and the requirement that its parameters be predictable from the data by a nonlinear regressor, which can limit the model's expressive power. The IWAE instead draws multiple samples from the recognition network and weights them by their importance weights, allowing it to model complex posteriors more flexibly. Empirical results show that IWAEs learn richer latent representations and achieve better test log-likelihoods than VAEs on density estimation benchmarks. The tighter bound and the use of multiple samples improve the model's ability to capture complex data distributions, leading to improved generative performance.
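The k-sample bound at the heart of the IWAE averages the importance weights p(x, z_i) / q(z_i | x) inside the logarithm: L_k = E[log (1/k) sum_i p(x, z_i) / q(z_i | x)], which recovers the standard VAE bound (ELBO) at k = 1 and approaches log p(x) as k grows. The minimal sketch below illustrates this numerically; it assumes a hand-picked one-dimensional Gaussian model and a deliberately mismatched q, neither of which comes from the paper, where the models are deep neural networks.

```python
import numpy as np

# Toy illustration (assumed for this sketch, not from the paper):
# generative model:  z ~ N(0, 1),  x | z ~ N(z, sigma_x^2)
# recognition model (deliberately mismatched): q(z | x) = N(0.5 * x, 1)
rng = np.random.default_rng(0)
sigma_x = 1.0
x = 1.5  # a single observed data point


def log_normal(v, mean, std):
    return -0.5 * np.log(2 * np.pi * std ** 2) - 0.5 * ((v - mean) / std) ** 2


def iwae_bound(x, k, n_batches=5000):
    """Monte Carlo estimate of the k-sample IWAE bound L_k."""
    estimates = []
    for _ in range(n_batches):
        z = rng.normal(0.5 * x, 1.0, size=k)        # k samples from q(z | x)
        log_w = (log_normal(z, 0.0, 1.0)            # log p(z)
                 + log_normal(x, z, sigma_x)        # + log p(x | z)
                 - log_normal(z, 0.5 * x, 1.0))     # - log q(z | x)
        # log (1/k) sum_i w_i, computed stably via log-sum-exp
        m = log_w.max()
        estimates.append(m + np.log(np.mean(np.exp(log_w - m))))
    return np.mean(estimates)


# Marginalizing z gives x ~ N(0, 1 + sigma_x^2), so log p(x) is exact here.
true_ll = log_normal(x, 0.0, np.sqrt(1.0 + sigma_x ** 2))

for k in (1, 5, 50):
    print(f"k={k:3d}  L_k ~ {iwae_bound(x, k):.4f}   (log p(x) = {true_ll:.4f})")
```

Running this shows L_1 (the ordinary ELBO under the mismatched q) sitting noticeably below log p(x), with the gap shrinking as k increases, which is the mechanism the paper exploits during training.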