Conditional Image Synthesis with Auxiliary Classifier GANs

20 Jul 2017 | Augustus Odena, Christopher Olah, Jonathon Shlens
This paper introduces new methods for improving the training of generative adversarial networks (GANs) for image synthesis, using label conditioning to achieve globally coherent $128 \times 128$ resolution images. The authors expand on previous work with two new analyses for assessing the discriminability and diversity of samples from class-conditional image synthesis models. These analyses show that high-resolution samples carry more class information than low-resolution ones: $128 \times 128$ samples are more than twice as discriminable as artificially resized $32 \times 32$ samples. In addition, 84.7% of ImageNet classes exhibit diversity comparable to real ImageNet data. The paper also introduces the auxiliary classifier GAN (AC-GAN) architecture, which combines class conditioning in the generator with an auxiliary decoder in the discriminator that reconstructs class labels, improving the quality and stability of the generated images. The AC-GAN model is trained across all 1000 ImageNet classes, and the authors demonstrate its effectiveness through experiments measuring image discriminability and diversity. The results show that the AC-GAN produces high-quality, diverse, and discriminable samples, outperforming previous state-of-the-art methods.
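
To make the AC-GAN objective concrete, below is a minimal PyTorch-style sketch of the two-headed discriminator and its losses. It follows the paper's split of the log-likelihood into a source term $L_S$ (real vs. fake) and a class term $L_C$ (correct label), but the layer choices and names (`Discriminator`, `feature_dim`, `n_classes`) are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal AC-GAN sketch, assuming PyTorch. The discriminator has a shared
# trunk and two heads: a source head (real vs. fake) and an auxiliary
# class head that reconstructs the conditioning label.
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self, n_classes: int = 1000, feature_dim: int = 256):
        super().__init__()
        # Shared convolutional trunk (stand-in for a DCGAN-style body).
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, feature_dim, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.source_head = nn.Linear(feature_dim, 1)         # logit for P(S = real | X)
        self.class_head = nn.Linear(feature_dim, n_classes)  # logits for P(C = c | X)

    def forward(self, x):
        h = self.trunk(x)
        return self.source_head(h), self.class_head(h)

bce = nn.BCEWithLogitsLoss()
ce = nn.CrossEntropyLoss()

def discriminator_loss(D, x_real, y_real, x_fake, y_fake):
    # D maximizes L_S + L_C: correct source and correct class on both
    # real and generated images (here, minimize the negative log-likelihoods).
    s_real, c_real = D(x_real)
    s_fake, c_fake = D(x_fake.detach())
    l_s = bce(s_real, torch.ones_like(s_real)) + bce(s_fake, torch.zeros_like(s_fake))
    l_c = ce(c_real, y_real) + ce(c_fake, y_fake)
    return l_s + l_c

def generator_loss(D, x_fake, y_fake):
    # G maximizes L_C - L_S: fool the source head while keeping the
    # generated image classifiable as its conditioning label.
    s_fake, c_fake = D(x_fake)
    return bce(s_fake, torch.ones_like(s_fake)) + ce(c_fake, y_fake)
```

In the paper the discriminator is trained to maximize $L_S + L_C$ and the generator to maximize $L_C - L_S$; the sketch expresses the same objectives as minimized negative log-likelihoods.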