Learning to Discover Cross-Domain Relations with Generative Adversarial Networks


15 May 2017 | Taeksoo Kim, Moonsu Cha, Hyunsoo Kim, Jung Kwon Lee, Jiwon Kim
The paper "Learning to Discover Cross-Domain Relations with Generative Adversarial Networks" by Taeksoo Kim, Moonsu Cha, Hyunsoo Kim, Jung Kwon Lee, and Jiwon Kim addresses the challenge of automatically discovering cross-domain relations from unpaired data. The authors propose DiscoGAN, a method based on generative adversarial networks (GANs) that learns to map images from one domain to another while preserving key attributes such as orientation and face identity. The model is trained on two independently collected sets of images and requires no explicit pair labels. At its core, DiscoGAN couples two GANs so that each generative function maps one domain to its counterpart domain. A reconstruction loss forces the generated image to remain a valid representation of the input, while the adversarial loss pushes it to be close to images in the target domain. Experimental results on toy domains and real-world image datasets demonstrate that DiscoGAN effectively discovers cross-domain relations and successfully applies them to image translation tasks.
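The coupled structure described above can be sketched with a toy example. Below, two linear maps stand in for the paper's convolutional generators G_AB and G_BA (an assumption for illustration; all names here are hypothetical, not the authors' code), and the reconstruction term measures how far G_BA(G_AB(x)) drifts from the original x. In the full model this term is combined with two adversarial losses, one per direction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "generators": linear maps on 2-D points stand in for conv nets.
W_ab = rng.normal(size=(2, 2))   # G_AB: domain A -> domain B
W_ba = rng.normal(size=(2, 2))   # G_BA: domain B -> domain A

def g_ab(x):
    return x @ W_ab.T

def g_ba(x):
    return x @ W_ba.T

def reconstruction_loss(x_a):
    """Mean squared distance between x and its round trip G_BA(G_AB(x)).

    This is the reconstruction (consistency) term; DiscoGAN's total
    generator objective also sums the two adversarial losses.
    """
    x_aba = g_ba(g_ab(x_a))
    return float(np.mean((x_aba - x_a) ** 2))

x_a = rng.normal(size=(16, 2))       # a batch of unpaired domain-A samples
loss_const_a = reconstruction_loss(x_a)
```

Minimizing this term (together with its B-side counterpart) is what ties the two otherwise independent GANs together and rules out mappings that collapse many inputs to one output.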