Training Generative Adversarial Networks with Limited Data


7 Oct 2020 | Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, Timo Aila
The paper "Training Generative Adversarial Networks with Limited Data" by Tero Karras et al. addresses the challenge of training GANs on small datasets, where the discriminator tends to overfit and training diverges. The authors propose an adaptive discriminator augmentation (ADA) mechanism that stabilizes training in limited-data regimes without requiring any changes to loss functions or network architectures. The approach works both when training from scratch and when fine-tuning an existing GAN on a new dataset.

The method applies a diverse set of augmentations to every image the discriminator sees, while ensuring that these augmentations do not leak into the generated images. The authors analyze the conditions under which augmentations remain non-leaking and design a pipeline of 18 transformations that satisfies them. They also introduce an adaptive control scheme that dynamically adjusts the augmentation strength based on the measured degree of overfitting, removing the need to tune it by hand.

The method is demonstrated on several datasets, achieving good results with only a few thousand training images and often matching StyleGAN2 results obtained with an order of magnitude more data. The authors also show that CIFAR-10 is, in fact, a limited-data benchmark, and significantly improve the record FID on it. The paper concludes by discussing the broader impact of the method, emphasizing its potential to enable high-quality generative models in applied fields where data is scarce.
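To make the adaptive control scheme concrete, below is a minimal PyTorch sketch of the feedback loop, assuming the r_t overfitting heuristic (the mean sign of the discriminator's outputs on real images) and a fixed target value as described in the paper. The names `AdaptiveAugmentController` and `maybe_augment` are hypothetical helpers for illustration, a single horizontal flip stands in for the full 18-transformation pipeline, and the official implementation at https://github.com/NVlabs/stylegan2-ada differs in detail.

```python
import torch

class AdaptiveAugmentController:
    """Hedged sketch of ADA-style augmentation-strength control (illustrative,
    not the authors' official implementation)."""

    def __init__(self, target=0.6, speed_imgs=500_000, batch_size=64):
        self.target = target
        self.p = 0.0  # augmentation probability, clamped to [0, 1]
        # Step size chosen so p can traverse 0 -> 1 over `speed_imgs` real
        # images, mirroring the adjustment rate described in the paper.
        self.step = batch_size / speed_imgs

    def update(self, d_real_logits: torch.Tensor) -> float:
        # r_t = E[sign(D(real))]; values near 1 mean the discriminator is
        # confidently separating reals, i.e. overfitting, so raise p.
        r_t = torch.sign(d_real_logits).mean().item()
        self.p += self.step if r_t > self.target else -self.step
        self.p = min(max(self.p, 0.0), 1.0)
        return self.p


def maybe_augment(images: torch.Tensor, p: float) -> torch.Tensor:
    """Apply an augmentation to each image independently with probability p.
    A single horizontal flip stands in for the paper's 18-transformation
    pipeline; inputs are assumed to be NCHW tensors."""
    mask = torch.rand(images.shape[0], device=images.device) < p
    flipped = torch.flip(images, dims=[3])
    return torch.where(mask.view(-1, 1, 1, 1), flipped, images)
```

In a training loop, both real and generated images would pass through `maybe_augment` before reaching the discriminator, and `controller.update(...)` would be called periodically on the discriminator's real-image outputs so that p rises only as overfitting sets in.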