25 May 2016 | Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow & Brendan Frey
The paper introduces the "adversarial autoencoder" (AAE), a probabilistic autoencoder that uses generative adversarial networks (GANs) to perform variational inference by matching the aggregated posterior of the hidden code vector to an arbitrary prior distribution. Because the aggregated posterior fills the prior, generating from any part of the prior space yields meaningful samples, and the decoder learns a deep generative model that maps the imposed prior to the data distribution. Training involves dual objectives: a reconstruction error criterion and an adversarial training criterion that regularizes the code distribution.

The AAE is applied to semi-supervised classification, disentangling style and content in images, unsupervised clustering, dimensionality reduction, and data visualization. Experiments on MNIST, Street View House Numbers, and the Toronto Face dataset show competitive results in both generative modeling and semi-supervised classification. The paper also discusses the relationship between AAEs and variational autoencoders (VAEs), generative moment matching networks (GMMNs), and GANs, highlighting that AAEs can impose complex prior distributions on the code space without explicit knowledge of the prior's functional form.
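The dual-objective training described above alternates a reconstruction phase (updating encoder and decoder) with a regularization phase, in which a discriminator on code vectors pushes the aggregated posterior toward the prior. The following is a minimal PyTorch sketch of that loop, not the authors' implementation: the layer sizes, unit-Gaussian prior, optimizers, and the random stand-in batch are all assumptions for illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x_dim, z_dim, h, batch = 784, 8, 64, 32  # assumed sizes, MNIST-like input

# Encoder q(z|x), decoder p(x|z), and a discriminator on code vectors.
enc = nn.Sequential(nn.Linear(x_dim, h), nn.ReLU(), nn.Linear(h, z_dim))
dec = nn.Sequential(nn.Linear(z_dim, h), nn.ReLU(), nn.Linear(h, x_dim), nn.Sigmoid())
disc = nn.Sequential(nn.Linear(z_dim, h), nn.ReLU(), nn.Linear(h, 1))

opt_ae = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(enc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

x = torch.rand(batch, x_dim)  # stand-in for a batch of images in [0, 1]

# Phase 1 (reconstruction): update encoder and decoder on reconstruction error.
recon_loss = nn.functional.binary_cross_entropy(dec(enc(x)), x)
opt_ae.zero_grad(); recon_loss.backward(); opt_ae.step()

# Phase 2a (regularization, discriminator step): distinguish samples from the
# imposed prior p(z) — here a unit Gaussian — from posterior codes q(z|x).
z_prior = torch.randn(batch, z_dim)
z_post = enc(x).detach()
d_loss = (bce(disc(z_prior), torch.ones(batch, 1)) +
          bce(disc(z_post), torch.zeros(batch, 1)))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Phase 2b (regularization, generator step): the encoder tries to fool the
# discriminator, pulling the aggregated posterior q(z) toward p(z).
g_loss = bce(disc(enc(x)), torch.ones(batch, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In a real run the three steps would repeat over minibatches of data; after training, decoding samples drawn from the prior produces the generative model the paper describes.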