30 Jan 2017 | Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, Max Welling
This paper introduces inverse autoregressive flow (IAF), a new type of normalizing flow that scales well to high-dimensional latent spaces. IAF composes a chain of invertible transformations, each parameterized by an autoregressive neural network, which yields flexible approximate posteriors while keeping variational inference efficient: the Jacobian of each transformation is triangular, so its log-determinant is cheap to compute. The method substantially improves upon diagonal Gaussian approximate posteriors and, combined with a deep variational autoencoder, is competitive with neural autoregressive models in log-likelihood on natural images while permitting much faster synthesis.

The paper situates IAF among related normalizing flows, including NICE and Hamiltonian flows, and compares against them empirically. In experiments with deep variational autoencoders, IAF achieves state-of-the-art results on MNIST and CIFAR-10, with significant improvements in log-likelihood and faster sampling than purely autoregressive models. The authors conclude that IAF is a promising approach to scalable variational inference in high-dimensional spaces.
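To make one step of the chain concrete, below is a minimal numpy sketch under stated assumptions. A single IAF transformation updates z' = mu + sigma * z, where mu and sigma are produced autoregressively from z (output i depends only on z[:i]), so the Jacobian is lower triangular and log|det J| = sum(log sigma). The single masked linear layer and all parameter names here are illustrative assumptions, not the paper's code; the paper uses deeper MADE/PixelCNN-style networks additionally conditioned on a context vector from the encoder.

```python
import numpy as np

def masked_autoregressive_params(z, W_m, W_s, b_m, b_s, mask):
    """Minimal MADE-style masked linear layer (assumed parameterization):
    the strictly lower-triangular mask makes output i depend only on z[:i]."""
    m = (W_m * mask) @ z + b_m          # shift, autoregressive in z
    s = (W_s * mask) @ z + b_s          # pre-activation log-scale
    sigma = 1e-3 + np.log1p(np.exp(s))  # softplus keeps sigma positive
    return m, sigma

def iaf_step(z, params):
    """One IAF transform: z' = mu + sigma * z.
    Because sigma[i] depends only on z[:i], the Jacobian is lower
    triangular with diagonal sigma, so log|det J| = sum(log sigma)."""
    m, sigma = masked_autoregressive_params(z, *params)
    z_new = m + sigma * z
    log_det = np.sum(np.log(sigma))
    return z_new, log_det

# Usage: random illustrative parameters for a 4-dimensional latent.
D = 4
rng = np.random.default_rng(0)
mask = np.tril(np.ones((D, D)), k=-1)   # strict lower triangle
params = (rng.normal(size=(D, D)), rng.normal(size=(D, D)),
          rng.normal(size=D), rng.normal(size=D), mask)
z0 = rng.normal(size=D)
z1, log_det = iaf_step(z0, params)
```

Stacking several such steps (the paper suggests reversing the variable ordering between steps) and subtracting the accumulated log-determinants from log q(z_0) gives the density of the final sample, which is what makes the flow usable inside the variational bound.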