Coupled Generative Adversarial Networks

20 Sep 2016 | Ming-Yu Liu, Oncel Tuzel
Coupled Generative Adversarial Networks (CoGAN) are proposed for learning a joint distribution of multi-domain images without requiring corresponding image pairs in the training data. Unlike existing methods, which need tuples of corresponding images, CoGAN learns the joint distribution from samples drawn only from the individual domains' marginal distributions. This is achieved through a weight-sharing constraint that limits network capacity and favors a joint-distribution solution over a product of marginal distributions.

CoGAN consists of a pair of GANs, one responsible for each domain. By sharing the weights of the layers that decode high-level semantics (the first layers of the generators and the last layers of the discriminators), CoGAN learns to generate pairs of corresponding images without any correspondence supervision.

The framework is applied to learning joint distributions of color and depth images and of face images with different attributes, and it is further demonstrated on unsupervised domain adaptation and image transformation. In experiments on digit, face, and RGBD image tasks, CoGAN outperforms conditional GANs at generating corresponding images, achieving higher pixel agreement ratios, and it adapts to new domains without labeled data.
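A minimal PyTorch sketch of this weight-sharing layout may make the architecture concrete. It is illustrative only: the fully connected layers, their sizes, and the class names are assumptions for brevity, not the paper's exact convolutional networks. The key structural point it shows is that the two generators share their early layers (high-level semantics) while keeping separate output heads, and the two discriminators share their final layers while keeping separate input stems.

```python
import torch
import torch.nn as nn

class CoGANGenerators(nn.Module):
    """Pair of generators: early layers (high-level semantics) are shared,
    later layers (low-level, domain-specific details) are separate."""
    def __init__(self, z_dim=100, img_dim=784):  # hypothetical sizes
        super().__init__()
        # Shared trunk decodes high-level semantics from the latent code.
        self.shared = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
        )
        # Domain-specific heads render low-level details for each domain.
        self.head_a = nn.Sequential(nn.Linear(512, img_dim), nn.Tanh())
        self.head_b = nn.Sequential(nn.Linear(512, img_dim), nn.Tanh())

    def forward(self, z):
        h = self.shared(z)
        # One latent code yields a pair of corresponding images.
        return self.head_a(h), self.head_b(h)

class CoGANDiscriminators(nn.Module):
    """Pair of discriminators: early layers are domain-specific,
    final layers (high-level features) are shared."""
    def __init__(self, img_dim=784):
        super().__init__()
        self.stem_a = nn.Sequential(nn.Linear(img_dim, 512), nn.LeakyReLU(0.2))
        self.stem_b = nn.Sequential(nn.Linear(img_dim, 512), nn.LeakyReLU(0.2))
        self.shared = nn.Sequential(
            nn.Linear(512, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),  # real/fake logit
        )

    def forward(self, x_a, x_b):
        return self.shared(self.stem_a(x_a)), self.shared(self.stem_b(x_b))

# Sampling: a single latent vector produces a corresponding image pair.
G = CoGANGenerators()
D = CoGANDiscriminators()
z = torch.randn(8, 100)
fake_a, fake_b = G(z)
logit_a, logit_b = D(fake_a, fake_b)
```

Training follows the standard adversarial game, with each discriminator seeing only unpaired real samples from its own domain; no paired data enters the objective. The correspondence between the two generated outputs emerges solely from the shared weights.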