PROGRESSIVE GROWING OF GANs FOR IMPROVED QUALITY, STABILITY, AND VARIATION


26 Feb 2018 | Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
This paper introduces a new training methodology for generative adversarial networks (GANs) called progressive growing. The key idea is to grow both the generator and discriminator progressively: training starts at a low resolution, and new layers are added as training progresses. Because the networks first learn large-scale image structure and only later refine fine details, training is both faster and more stable, enabling the generation of high-quality images such as 1024x1024 CELEBA faces.

The method also increases variation in the generated images, achieving a record inception score of 8.80 on CIFAR10. To counter mode collapse and unhealthy competition between the generator and discriminator, the authors add a minibatch standard deviation layer to the discriminator and apply pixelwise feature vector normalization in the generator, alongside learning-rate control and careful training configuration. Additional contributions include a new metric for evaluating GAN results and a higher-quality version of the CELEBA dataset.

Experiments compare the approach against other methods on several benchmark datasets, including CELEBA, LSUN, and CIFAR10. The results show that progressive growing produces high-quality images with good variation, trains stably, scales efficiently to high resolutions, and achieves state-of-the-art performance.
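The core growing step can be sketched as follows. When a new, higher-resolution layer is added, its output is faded in smoothly: the network blends the upsampled output of the existing low-resolution pathway with the new layer's output, with a weight alpha that ramps from 0 to 1. This is a minimal NumPy illustration under stated assumptions — the function names (`upsample_2x`, `fade_in`) and the use of nearest-neighbour upsampling are choices made here for the sketch, not the authors' implementation.

```python
import numpy as np

def upsample_2x(x):
    # Nearest-neighbour 2x upsampling of an (N, C, H, W) feature map.
    return x.repeat(2, axis=-2).repeat(2, axis=-1)

def fade_in(low_res_out, high_res_out, alpha):
    """Blend the upsampled old pathway with the newly added layer's output.

    alpha ramps linearly from 0 (only the old, upsampled output is used)
    to 1 (only the new high-resolution layer is used) over the course of
    a transition phase, so the new layer is introduced smoothly.
    """
    return (1.0 - alpha) * upsample_2x(low_res_out) + alpha * high_res_out
```

With alpha = 0 the network behaves exactly as it did before the new layer was added, which is what keeps training stable while the resolution grows.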
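The minibatch standard deviation technique mentioned above gives the discriminator a direct signal about variation across the batch: the standard deviation of each feature is computed over the minibatch, averaged into a single scalar, and appended as an extra constant feature map. A minimal NumPy sketch, assuming (N, C, H, W) feature maps; the epsilon and the single-scalar averaging follow the common formulation but are assumptions of this sketch:

```python
import numpy as np

def minibatch_stddev(x, eps=1e-8):
    """Append a minibatch-statistics feature map to x of shape (N, C, H, W).

    A generator suffering from mode collapse produces batches with low
    variation, which this extra feature makes easy for the discriminator
    to detect.
    """
    std = np.sqrt(x.var(axis=0) + eps)   # per-feature std over the batch: (C, H, W)
    mean_std = std.mean()                # average into one scalar statistic
    # Replicate the scalar into a constant feature map for every sample.
    extra = np.full((x.shape[0], 1, x.shape[2], x.shape[3]), mean_std)
    return np.concatenate([x, extra], axis=1)
```

If the generator collapses to near-identical samples, the appended map is near zero, which the discriminator can penalize.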
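Pixelwise feature vector normalization, the other stabilization technique named above, constrains signal magnitudes in the generator by normalizing the feature vector at each spatial location to unit average energy across channels. A short NumPy sketch (the epsilon value is an assumption):

```python
import numpy as np

def pixel_norm(x, eps=1e-8):
    """Normalize each pixel's feature vector across channels, x: (N, C, H, W).

    Dividing by the root-mean-square over the channel axis prevents
    feature magnitudes in the generator from escalating as the two
    networks compete.
    """
    return x / np.sqrt((x ** 2).mean(axis=1, keepdims=True) + eps)
```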