Adversarial Autoencoders


25 May 2016 | Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, Brendan Frey
Adversarial autoencoders (AAEs) are probabilistic autoencoders that use generative adversarial networks (GANs) to perform variational inference by matching the aggregated posterior of the hidden code vector to an arbitrary prior distribution. Matching the aggregated posterior to the prior ensures that generating from any part of prior space yields meaningful samples, and the decoder of the AAE learns a deep generative model that maps the imposed prior to the data distribution. AAEs have proven effective for semi-supervised classification, disentangling the style and content of images, unsupervised clustering, dimensionality reduction, and data visualization. Experiments on the MNIST, Street View House Numbers, and Toronto Face datasets show that AAEs achieve competitive results in both generative modeling and semi-supervised classification.
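In symbols, a minimal sketch of the matching criterion (generic notation, not a transcription of the paper's derivation): let p(z) be the prior over codes, q(z|x) the encoding distribution, and p_d(x) the data distribution. The aggregated posterior is

    q(\mathbf{z}) = \int_{\mathbf{x}} q(\mathbf{z} \mid \mathbf{x}) \, p_d(\mathbf{x}) \, d\mathbf{x},

and the adversarial network is trained so that q(z) as a whole matches p(z). A VAE instead penalizes each sample's posterior with \mathrm{KL}\big(q(\mathbf{z} \mid \mathbf{x}) \,\|\, p(\mathbf{z})\big); swapping that per-sample KL term for aggregate adversarial matching is what lets the AAE impose priors that are only available through sampling, with no tractable density required.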
The AAE framework combines the reconstruction error criterion of a traditional autoencoder with an adversarial training criterion that matches the aggregated posterior distribution of the latent representation to an arbitrary prior distribution; a code sketch of this two-phase procedure appears at the end of this summary. The training criterion has a strong connection to variational autoencoder (VAE) training: the encoder learns to convert the data distribution to the prior distribution, while the decoder learns a deep generative model that maps the imposed prior to the data distribution.

In semi-supervised settings, label information is incorporated into the adversarial training stage to better shape the distribution of the hidden code, which lets the model regularize the latent representation more heavily. The AAE can also perform unsupervised clustering, disentangling discrete class variables from continuous latent style variables without any label information.

For dimensionality reduction and data visualization, the adversarial regularization prevents the manifold fracturing problem typically encountered in embeddings learned by autoencoders. The architecture can also embed images into larger dimensionalities, achieving good classification error rates in both supervised and semi-supervised settings. On real-valued MNIST and the Toronto Face dataset, the AAE is reported to outperform competing models in test likelihood and classification performance.
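The two training criteria above translate into a phased optimization: a reconstruction phase that updates the autoencoder, and a regularization phase that first updates a discriminator to tell prior samples from encoder codes, then updates the encoder to fool it. Below is a minimal PyTorch sketch of this loop; the 784-dimensional inputs, 8-dimensional Gaussian prior, layer widths, and learning rates are illustrative assumptions, not the paper's configuration.

import torch
import torch.nn as nn

latent_dim = 8  # assumed code size; the paper explores several

# Encoder maps flattened images to codes; decoder maps codes back to images.
encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(),
                        nn.Linear(256, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                        nn.Linear(256, 784), nn.Sigmoid())
# Discriminator outputs a logit: prior sample or encoder output?
discriminator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                              nn.Linear(256, 1))

opt_ae = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
opt_disc = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
opt_gen = torch.optim.Adam(encoder.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def train_step(x):  # x: (batch, 784) tensor with values in [0, 1]
    batch = x.size(0)

    # Reconstruction phase: ordinary autoencoder update.
    recon_loss = nn.functional.mse_loss(decoder(encoder(x)), x)
    opt_ae.zero_grad()
    recon_loss.backward()
    opt_ae.step()

    # Regularization phase, step 1: train the discriminator to separate
    # draws from the Gaussian prior p(z) from encoder codes q(z).
    z_fake = encoder(x).detach()        # stop gradients into the encoder
    z_real = torch.randn_like(z_fake)   # samples from the imposed prior
    d_loss = (bce(discriminator(z_real), torch.ones(batch, 1)) +
              bce(discriminator(z_fake), torch.zeros(batch, 1)))
    opt_disc.zero_grad()
    d_loss.backward()
    opt_disc.step()

    # Regularization phase, step 2: update the encoder (the GAN generator)
    # to fool the discriminator, pushing the aggregated posterior toward the prior.
    g_loss = bce(discriminator(encoder(x)), torch.ones(batch, 1))
    opt_gen.zero_grad()
    g_loss.backward()
    opt_gen.step()

    return recon_loss.item(), d_loss.item(), g_loss.item()

After training, decoding draws from the prior, e.g. decoder(torch.randn(n, latent_dim)), should produce meaningful outputs, since the adversarial phase has driven the code distribution to cover the prior.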