Auto-Encoding Variational Bayes

10 Dec 2022 (latest arXiv revision; first published 20 Dec 2013) | Diederik P. Kingma, Max Welling
The paper introduces Stochastic Gradient Variational Bayes (SGVB), a stochastic variational inference and learning algorithm for efficient inference in directed probabilistic models with continuous latent variables and intractable posterior distributions. The contributions are twofold. First, the paper shows that reparameterizing the variational lower bound yields a differentiable, unbiased estimator of the bound that can be optimized with standard stochastic gradient methods. Second, it proposes the Auto-Encoding Variational Bayes (AEVB) algorithm, which performs efficient inference and learning on i.i.d. datasets with continuous latent variables per datapoint by fitting an approximate inference model (a recognition model) to the intractable posterior using the SGVB estimator. AEVB makes approximate posterior inference and parameter learning efficient and is applicable to a wide range of tasks, including recognition, denoising, representation learning, and visualization. The paper also discusses theoretical advantages and presents experimental results, including comparisons with the wake-sleep algorithm and Monte Carlo EM.
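For reference, the variational lower bound (ELBO) that SGVB estimates can be written, following Eq. (3) of the paper, as

```latex
\mathcal{L}(\theta, \phi; \mathbf{x}^{(i)})
  = -D_{KL}\!\left(q_\phi(\mathbf{z} \mid \mathbf{x}^{(i)}) \,\big\|\, p_\theta(\mathbf{z})\right)
  + \mathbb{E}_{q_\phi(\mathbf{z} \mid \mathbf{x}^{(i)})}\!\left[\log p_\theta(\mathbf{x}^{(i)} \mid \mathbf{z})\right]
```

where the KL term is often available in closed form, and the expectation is estimated by Monte Carlo after reparameterizing z = g_phi(eps, x) with eps ~ p(eps); for a diagonal Gaussian posterior this is z = mu + sigma * eps, eps ~ N(0, I).

The sketch below shows how this estimator is typically realized as a variational autoencoder. It is a minimal illustration, not the authors' code: PyTorch, the Adam optimizer, the layer sizes, and the names `VAE` and `elbo` are assumptions made for the example (the paper itself uses MLP encoders/decoders trained with stochastic gradient ascent).

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=400, z_dim=20):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.Tanh())
        self.mu = nn.Linear(h_dim, z_dim)      # encoder head: posterior mean
        self.logvar = nn.Linear(h_dim, z_dim)  # encoder head: posterior log-variance
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.Tanh(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
        # so the sample is a differentiable function of mu and logvar.
        eps = torch.randn_like(mu)
        z = mu + torch.exp(0.5 * logvar) * eps
        return self.dec(z), mu, logvar

def elbo(x, logits, mu, logvar):
    """Single-sample SGVB estimate of the lower bound, summed over a minibatch."""
    # Expected reconstruction log-likelihood under a Bernoulli decoder.
    rec = -nn.functional.binary_cross_entropy_with_logits(logits, x, reduction='sum')
    # KL(q(z|x) || p(z)) in closed form for diagonal Gaussians (Appendix B of the paper).
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec - kl

# Illustrative training step on a random stand-in minibatch (MNIST-sized inputs).
model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(100, 784)
logits, mu, logvar = model(x)
loss = -elbo(x, logits, mu, logvar) / x.size(0)  # minimize the negative ELBO
opt.zero_grad()
loss.backward()
opt.step()
```

Each forward pass draws a single noise sample per datapoint (L = 1 in the paper's notation), which the paper reports is sufficient as long as the minibatch size M is large enough, e.g. M = 100.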