UNROLLED GENERATIVE ADVERSARIAL NETWORKS

12 May 2017 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein
Unrolled Generative Adversarial Networks (UGANs) stabilize GAN training by defining the generator objective with respect to an unrolled optimization of the discriminator. This interpolates between training the generator against the optimal discriminator (ideal but infeasible) and against the current discriminator (often unstable), improving both the stability of training and the diversity of generated samples. UGANs address mode collapse, stabilize training of GANs with complex recurrent generators, and increase coverage of the data distribution.

The method unrolls several steps of the discriminator's optimization inside the generator's objective, creating a surrogate loss that approximates the true generator objective. Concretely, the unrolled objective is computed by simulating K discriminator gradient updates and differentiating through them, so the generator can anticipate how the discriminator would respond to its moves. This foresight reduces the generator's tendency to collapse onto a single mode and encourages it to spread probability mass across the data distribution.

Experiments on several datasets, including a 2D mixture of Gaussians, a pathological model with mismatched generator and discriminator capacities, and an augmented MNIST task, demonstrate that unrolling improves mode coverage and stability. The technique is particularly effective at reducing both discrete mode collapse and collapse onto a low-dimensional manifold, yielding better sample diversity and quality.
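To make the surrogate concrete: the paper defines f_K(θ_G, θ_D) = f(θ_G, θ_D^K), where θ_D^K results from K gradient steps on the discriminator's objective starting from its current parameters, and the generator descends the gradient of f_K through those simulated steps. Below is a minimal PyTorch sketch of this generator loss. The function names, the plain-SGD inner step, and the non-saturating generator term are simplifying assumptions made here for clarity; the paper unrolls the discriminator's actual optimizer.

```python
# Minimal, illustrative sketch of the unrolled generator loss in PyTorch.
# Requires torch >= 2.0 for torch.func.functional_call.
import torch
import torch.nn.functional as F
from torch.func import functional_call


def gan_loss(d_real_logits, d_fake_logits):
    # Discriminator loss (the negation of the usual GAN value function f):
    # D minimizes this; the generator's surrogate is built from it below.
    real = F.binary_cross_entropy_with_logits(
        d_real_logits, torch.ones_like(d_real_logits))
    fake = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.zeros_like(d_fake_logits))
    return real + fake


def unrolled_generator_loss(G, D, z, x_real, K=5, eta=1e-2):
    # theta_D^0: the discriminator's current parameters.
    d_params = dict(D.named_parameters())
    x_fake = G(z)
    for _ in range(K):
        # Evaluate D with the simulated parameters theta_D^k.
        d_real = functional_call(D, d_params, (x_real,))
        d_fake = functional_call(D, d_params, (x_fake,))
        loss_d = gan_loss(d_real, d_fake)
        # create_graph=True keeps each inner step differentiable, so the
        # final loss can backpropagate through all K simulated updates.
        grads = torch.autograd.grad(
            loss_d, tuple(d_params.values()), create_graph=True)
        d_params = {name: p - eta * g
                    for (name, p), g in zip(d_params.items(), grads)}
    # Generator loss f_K, evaluated against the unrolled discriminator D^K.
    # (Non-saturating form, a common substitute for the paper's
    # log(1 - D(G(z))) term.)
    d_fake = functional_call(D, d_params, (x_fake,))
    return F.binary_cross_entropy_with_logits(
        d_fake, torch.ones_like(d_fake))
```

The discriminator itself is still trained with ordinary single-step updates on gan_loss; only the generator differentiates through the simulated steps, so compute and memory grow roughly linearly with the number of unrolling steps K.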
On CIFAR10, unrolled GANs generate more diverse and realistic images, with better reconstruction of training data and reduced mode collapse. Unrolling also improves the generator's ability to produce images that closely match specific training samples, as measured by inference via optimization and by the distribution of pairwise distances between generated samples. While unrolling increases computational cost, it offers significant improvements in GAN training stability and performance. The approach helps bridge the gap between theoretical and practical results in GAN training and underscores the importance of better update rules for generators and discriminators. Future work could unroll the optimization of both generator and discriminator, further capturing the recursive nature of the adversarial game.
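The inference-via-optimization measurement mentioned above can be sketched as follows: for a target training image, search the latent space for the code whose generated sample best reconstructs the target, and treat the residual error as evidence for or against coverage of that mode. The helper name reconstruct, the choice of Adam, and the hyperparameters below are illustrative assumptions; the paper's exact optimization procedure may differ.

```python
# Illustrative sketch of inference via optimization, assuming a trained
# PyTorch generator G that maps latent codes of size z_dim to images.
import torch


def reconstruct(G, x_target, z_dim, steps=500, lr=0.05, restarts=3):
    # Freeze the generator; only the latent code z is optimized.
    G.requires_grad_(False)
    best_z, best_err = None, float("inf")
    for _ in range(restarts):
        # Random restarts, since the optimization over z is non-convex.
        z = torch.randn(1, z_dim, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            err = ((G(z) - x_target) ** 2).mean()
            err.backward()
            opt.step()
        if err.item() < best_err:
            best_z, best_err = z.detach(), err.item()
    # A small best_err suggests the generator covers the mode containing
    # x_target; large errors across many targets indicate missing modes.
    return best_z, best_err
```

Comparing the distribution of these reconstruction errors, together with the pairwise distances between generated samples, against the same statistics on real data yields the diversity measures used in the CIFAR10 evaluation.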