12 May 2017 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein
The paper introduces a method to stabilize Generative Adversarial Networks (GANs) by defining the generator's objective in terms of an unrolled optimization of the discriminator. This interpolates between two extremes: using the optimal discriminator in the generator's objective, which is ideal but computationally infeasible, and using the current discriminator, which often leads to instability and poor solutions. The technique mitigates mode collapse, stabilizes training of GANs with complex recurrent generators, and increases the diversity and coverage of the data distribution produced by the generator. Experiments on several datasets demonstrate improved mode coverage, stability, and sample diversity. The main drawback is the increased computational cost per training step, which scales linearly with the number of unrolling steps.
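
A minimal sketch of the core idea, written in JAX since differentiating through the inner optimization is then just ordinary autodiff. The toy generator/discriminator, parameter shapes, learning rate, and `unroll_steps` are illustrative assumptions, not the paper's reference implementation:

```python
import jax
import jax.numpy as jnp

def generator(params, z):
    # Toy one-layer generator mapping latents to data space (illustrative).
    return jnp.tanh(z @ params["w"] + params["b"])

def discriminator(params, x):
    # Toy linear discriminator returning logits (illustrative).
    return (x @ params["w"] + params["b"]).squeeze(-1)

def d_loss(d_params, g_params, z, x_real):
    # Standard GAN discriminator loss (the discriminator minimizes this):
    # -(E[log D(x)] + E[log(1 - D(G(z)))]).
    fake = generator(g_params, z)
    return -(jnp.mean(jax.nn.log_sigmoid(discriminator(d_params, x_real)))
             + jnp.mean(jax.nn.log_sigmoid(-discriminator(d_params, fake))))

def unrolled_g_loss(g_params, d_params, z, x_real, unroll_steps=5, lr=1e-2):
    # Take `unroll_steps` SGD steps on the discriminator. Because these
    # updates are ordinary JAX ops, gradients flow back to g_params
    # *through* the inner optimization when we differentiate below.
    d_unrolled = d_params
    for _ in range(unroll_steps):
        grads = jax.grad(d_loss)(d_unrolled, g_params, z, x_real)
        d_unrolled = jax.tree_util.tree_map(lambda p, g: p - lr * g,
                                            d_unrolled, grads)
    # Generator minimizes E[log(1 - D_K(G(z)))] against the unrolled copy.
    fake = generator(g_params, z)
    return jnp.mean(jax.nn.log_sigmoid(-discriminator(d_unrolled, fake)))

# Usage: gradient of the surrogate objective w.r.t. generator parameters.
key = jax.random.PRNGKey(0)
k1, k2, k3, k4 = jax.random.split(key, 4)
g_params = {"w": 0.1 * jax.random.normal(k1, (8, 2)), "b": jnp.zeros(2)}
d_params = {"w": 0.1 * jax.random.normal(k2, (2, 1)), "b": jnp.zeros(1)}
z = jax.random.normal(k3, (64, 8))
x_real = jax.random.normal(k4, (64, 2))
g_grads = jax.grad(unrolled_g_loss)(g_params, d_params, z, x_real)
```

Note that the unrolled copy only defines the generator's objective; the discriminator itself is still trained with standard single-step updates, and setting `unroll_steps=0` recovers ordinary GAN training.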