Coupled Generative Adversarial Networks

20 Sep 2016 | Ming-Yu Liu, Oncel Tuzel
The paper introduces Coupled Generative Adversarial Networks (CoGAN) for learning the joint distribution of multi-domain images without requiring corresponding images in the training set. Unlike existing approaches, which need tuples of corresponding images, CoGAN uses a weight-sharing constraint to favor a joint-distribution solution over a mere product of marginal distributions, so it can learn from samples drawn only from the marginal distributions of the individual domains. The framework consists of a pair of GANs, each responsible for synthesizing images in one domain, with weights shared in the layers responsible for high-level semantics. This forces the two GANs to decode high-level semantics in the same way, while the unshared layers for low-level details map the shared representation to images in the respective domains.

The paper demonstrates CoGAN's effectiveness through experiments on generating color and depth images and face images with different attributes, as well as on unsupervised domain adaptation and image-transformation tasks. The results show that CoGAN can learn the joint distribution without corresponding images, outperforming conditional GANs on certain tasks.
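To make the weight-sharing idea concrete, here is a minimal PyTorch sketch of a coupled generator pair, not the paper's exact architecture: the class name `CoupledGenerators`, the layer sizes, and `noise_dim` are illustrative assumptions. The first (high-level) layers are shared across the two domains, while each domain keeps its own output head for low-level details.

```python
import torch
import torch.nn as nn

class CoupledGenerators(nn.Module):
    """Sketch of CoGAN's generator pair: shared high-level layers,
    domain-specific low-level heads (hypothetical sizes)."""

    def __init__(self, noise_dim=100, img_dim=28 * 28):
        super().__init__()
        # Shared layers decode high-level semantics identically for both domains.
        self.shared = nn.Sequential(
            nn.Linear(noise_dim, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
        )
        # Unshared heads render the shared representation in each domain.
        self.head_a = nn.Sequential(nn.Linear(512, img_dim), nn.Tanh())
        self.head_b = nn.Sequential(nn.Linear(512, img_dim), nn.Tanh())

    def forward(self, z):
        h = self.shared(z)  # same latent code, same semantics...
        return self.head_a(h), self.head_b(h)  # ...in two domains

# One noise vector yields a corresponding image pair, even though
# training never uses paired images.
z = torch.randn(16, 100)
img_a, img_b = CoupledGenerators()(z)
print(img_a.shape, img_b.shape)  # torch.Size([16, 784]) for both
```

In the paper the discriminators mirror this setup: their last layers (which extract high-level features) are shared, while their first layers remain domain-specific.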