Wasserstein Auto-Encoders


5 Dec 2019 | Ilya Tolstikhin, Olivier Bousquet, Sylvain Gelly, and Bernhard Schölkopf
Wasserstein Auto-Encoders (WAE) are a new algorithm for building generative models of data distributions. WAE minimizes a penalized form of the Wasserstein (optimal transport) distance between the model distribution and the target data distribution, which leads to a different regularizer than the one used by Variational Auto-Encoders (VAE): the WAE regularizer encourages the encoded training distribution as a whole to match the prior. WAE is a generalization of adversarial auto-encoders (AAE).

The main contributions are a new family of regularized auto-encoders, two alternative regularizers for matching the encoded distribution to the prior (one based on GANs, called WAE-GAN, and one using the maximum mean discrepancy, called WAE-MMD), and an empirical evaluation on the MNIST and CelebA datasets. On the theoretical side, the paper shows that the primal form of the Wasserstein distance between the true data distribution and the latent variable model is equivalent to an optimization problem over probabilistic encoders, which motivates the auto-encoder formulation.

Experiments show that WAE retains many of the good properties of VAEs, such as stable training, an encoder-decoder architecture, and a well-structured latent manifold, while generating samples of better quality as measured by the FID score. Compared with VAEs and GANs, WAE-MMD achieves slightly better results than the VAE, and WAE-GAN achieves the best results overall. The paper also discusses related work on auto-encoders, optimal transport, and GANs.
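For reference, the penalized objective described above can be written compactly. The display below is a sketch reconstructed from the summary, using the paper's standard symbols rather than quoting it verbatim:

```latex
D_{\mathrm{WAE}}(P_X, P_G)
  \;=\; \inf_{Q(Z \mid X) \in \mathcal{Q}}
        \mathbb{E}_{P_X}\, \mathbb{E}_{Q(Z \mid X)}\!\bigl[\, c\bigl(X, G(Z)\bigr) \bigr]
        \;+\; \lambda \, \mathcal{D}_Z(Q_Z, P_Z)
```

Here c is the reconstruction cost (for example the squared Euclidean distance), G is the decoder, Q_Z is the distribution of encoded training points, P_Z is the prior, λ > 0 is a regularization coefficient, and D_Z is a divergence between distributions on the latent space: instantiating D_Z with an adversarial discriminator gives WAE-GAN, while instantiating it with the maximum mean discrepancy gives WAE-MMD.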
The paper concludes that WAE provides a new approach to generative modeling, with potential for further exploration of the criteria for matching the encoded distribution to the prior distribution.
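As a concrete illustration of one such matching criterion, the MMD penalty used by WAE-MMD can be estimated from mini-batches of encoded points and prior samples. The sketch below is not the authors' implementation; it assumes an inverse multiquadratic kernel with an illustrative scale, and uses random codes as stand-ins for encoder outputs:

```python
import numpy as np

def imq_kernel(a, b, scale=2.0):
    """Inverse multiquadratic kernel k(x, y) = C / (C + ||x - y||^2)."""
    z_dim = a.shape[1]
    C = scale * z_dim  # illustrative choice of the kernel scale
    sq_dists = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return C / (C + sq_dists)

def mmd_penalty(q_z, p_z):
    """Sample-based estimate of MMD^2 between encoded codes and prior samples."""
    n = q_z.shape[0]
    k_qq = imq_kernel(q_z, q_z)
    k_pp = imq_kernel(p_z, p_z)
    k_qp = imq_kernel(q_z, p_z)
    off_diag = 1.0 - np.eye(n)  # drop self-similarity terms
    return ((k_qq * off_diag).sum() + (k_pp * off_diag).sum()) / (n * (n - 1)) \
           - 2.0 * k_qp.mean()

# Toy usage with random stand-ins for encoder outputs and prior samples.
rng = np.random.default_rng(0)
q_z = rng.normal(loc=0.5, size=(128, 8))  # hypothetical encoded training points
p_z = rng.normal(loc=0.0, size=(128, 8))  # samples from a standard Gaussian prior
lam = 10.0                                # regularization coefficient (illustrative)
print("lambda * MMD penalty:", lam * mmd_penalty(q_z, p_z))
```

During training, this penalty, scaled by λ, would be added to the per-batch reconstruction cost to form the WAE-MMD objective.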