30 Jan 2017 | Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, Max Welling
The paper introduces Inverse Autoregressive Flow (IAF), a new type of normalizing flow designed to scale well to high-dimensional latent spaces. IAF is based on invertible transformations derived from autoregressive neural networks, which are flexible and computationally efficient. The authors demonstrate that IAF significantly improves the flexibility of posterior distributions in variational autoencoders (VAEs) compared to diagonal Gaussian approximate posteriors. They also show that combining IAF with a novel type of VAE achieves competitive log-likelihood results on natural images with faster synthesis speeds. The paper includes experimental results on the MNIST and CIFAR-10 datasets, highlighting the effectiveness of IAF in improving variational inference and learning.
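To make the core idea concrete, here is a minimal NumPy sketch of a single IAF step, following the paper's numerically stable gated update z' = σ ⊙ z + (1 − σ) ⊙ m. The `made` callable is a hypothetical stand-in for a MADE-style masked autoregressive network whose i-th output depends only on z[:i]; because of that, the Jacobian of the transform is triangular and its log-determinant reduces to a cheap sum, which is what makes IAF scale to high-dimensional latents.

```python
import numpy as np

def iaf_step(z, made, log_det):
    """Apply one Inverse Autoregressive Flow transformation.

    z       : latent vector, shape (d,)
    made    : hypothetical MADE-style network mapping z -> (m, s), where
              output i depends only on z[:i] (autoregressive structure)
    log_det : running log-determinant of the flow's Jacobian
    """
    m, s = made(z)                       # autoregressive shift and gate logits
    sigma = 1.0 / (1.0 + np.exp(-s))     # sigmoid gate in (0, 1)
    z_new = sigma * z + (1.0 - sigma) * m
    # Triangular Jacobian: log-det is just the sum of the log gates.
    log_det = log_det + np.sum(np.log(sigma))
    return z_new, log_det
```

Stacking several such steps (typically with the latent dimensions reordered between steps) yields a posterior far more flexible than a diagonal Gaussian, while each step remains a single parallel pass through the network.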