StarGAN v2: Diverse Image Synthesis for Multiple Domains

26 Apr 2020 | Yunjey Choi, Youngjung Uh, Jaejun Yoo, Jung-Woo Ha
StarGAN v2 is a deep learning model for diverse image synthesis across multiple domains. It addresses the twin challenges of generating diverse, high-quality images and of supporting many domains with a single model. The model introduces a style code that captures domain-specific styles, enabling more flexible and scalable image generation, and adds two modules to produce these codes: a mapping network, which generates style codes from random latent vectors, and a style encoder, which extracts them from reference images. Training combines a diversity regularization term with a cycle consistency loss to keep the synthesized images both diverse and realistic.

StarGAN v2 was evaluated on CelebA-HQ and on the newly collected AFHQ dataset, showing superior visual quality, diversity, and scalability compared to existing methods. The AFHQ dataset, consisting of high-quality animal faces with large inter- and intra-domain variations, was released for research purposes. The model's performance was further validated through human evaluations, demonstrating its effectiveness at generating images that reflect the styles of reference images. Overall, StarGAN v2 is a scalable and efficient approach to multi-domain image-to-image translation, offering significant improvements over previous methods.
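To illustrate how the two style sources described above fit together, here is a minimal numpy sketch. All layer sizes and names are illustrative assumptions, and single linear layers stand in for the paper's shared MLP with per-domain branches and its convolutional encoder; the point is only that both paths emit style codes of the same shape.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_DOMAINS, LATENT_DIM, STYLE_DIM = 3, 16, 64
IMG_PIXELS = 8 * 8  # toy "image" size for the sketch

# Mapping network: turns a random latent code z into a style code,
# with one output head per domain (a linear layer per head stands in
# for the shared MLP plus per-domain branches).
mapping_heads = [rng.standard_normal((LATENT_DIM, STYLE_DIM))
                 for _ in range(NUM_DOMAINS)]

def mapping_network(z, domain):
    return np.tanh(z @ mapping_heads[domain])

# Style encoder: extracts a style code from a reference image, again
# with one head per domain (flattening the image and multiplying by a
# matrix stands in for the convolutional encoder).
encoder_heads = [rng.standard_normal((IMG_PIXELS, STYLE_DIM))
                 for _ in range(NUM_DOMAINS)]

def style_encoder(image, domain):
    return np.tanh(image.reshape(-1) @ encoder_heads[domain])

# Either path yields a style code of the same shape, so the generator
# can consume styles sampled from noise or extracted from a reference.
z = rng.standard_normal(LATENT_DIM)
ref = rng.standard_normal((8, 8))
s_from_noise = mapping_network(z, domain=1)
s_from_ref = style_encoder(ref, domain=1)
assert s_from_noise.shape == s_from_ref.shape == (STYLE_DIM,)
```

The interchangeable output shape is what lets a single generator support both latent-guided synthesis (sample a new style) and reference-guided synthesis (copy a style from an example image).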
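The two training terms mentioned above can be sketched as simple L1-based objectives. This is a hedged sketch: the function names are mine, and the paper's exact loss weights and expectations over domains are not reproduced here.

```python
import numpy as np

def diversity_loss(fake1, fake2):
    """Diversity-sensitive term: mean L1 distance between two translations
    of the same input under two different style codes. The generator is
    trained to *maximize* this, pushing it toward diverse outputs."""
    return np.mean(np.abs(fake1 - fake2))

def cycle_consistency_loss(x, x_reconstructed):
    """Cycle term: mean L1 distance between an input image and its
    reconstruction after translating to the target domain and back,
    which encourages preserving domain-invariant content."""
    return np.mean(np.abs(x - x_reconstructed))

# Identical outputs give zero diversity; a perfect round-trip
# reconstruction gives zero cycle loss.
a = np.ones((4, 4))
print(diversity_loss(a, a))          # 0.0
print(cycle_consistency_loss(a, a))  # 0.0
```

Note the opposite roles: the cycle term is minimized to keep content intact, while the diversity term is maximized so that different style codes produce visibly different images rather than collapsing to one output.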