9 Oct 2018 | Zili Yi¹,², Hao Zhang², Ping Tan², and Minglun Gong¹
DualGAN is a novel unsupervised dual-learning framework for image-to-image translation, inspired by dual learning in natural language processing. It trains image translators using two sets of unlabeled images drawn from different domains: the primal GAN learns to translate images from domain U to domain V, while the dual GAN learns to invert this task. Together the two form a closed loop, so an image from either domain can be translated and then reconstructed, and the reconstruction error is incorporated into the loss used to train the translators. Experiments on a variety of image translation tasks with unlabeled data show that DualGAN substantially outperforms a single GAN and, on several tasks, achieves results comparable to or better than conditional GANs trained on fully labeled data. Comparisons against both GANs and conditional GANs validate DualGAN's effectiveness across a wide range of image-to-image translation tasks.
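To make the closed-loop reconstruction idea concrete, here is a minimal sketch of the cycle reconstruction term. The translators `G_primal` and `G_dual` are hypothetical stand-ins (simple invertible maps, not the paper's learned generator networks), chosen only so the two loops U → V → U and V → U → V can be computed end to end:

```python
import numpy as np

# Hypothetical stand-in "translators": in DualGAN these are learned
# generator networks; here they are fixed mutually inverse maps so the
# closed-loop reconstruction can be demonstrated without training.
def G_primal(u):
    # translates a domain-U image to domain V
    return 2.0 * u + 1.0

def G_dual(v):
    # translates a domain-V image back to domain U
    return (v - 1.0) / 2.0

def reconstruction_loss(batch_u, batch_v):
    """L1 reconstruction error over both closed loops:
    U -> V -> U and V -> U -> V."""
    u_rec = G_dual(G_primal(batch_u))   # reconstruct the U batch
    v_rec = G_primal(G_dual(batch_v))   # reconstruct the V batch
    loss_u = np.abs(u_rec - batch_u).mean()
    loss_v = np.abs(v_rec - batch_v).mean()
    return loss_u + loss_v

# Toy batches standing in for unlabeled images from the two domains.
u = np.random.rand(4, 8, 8)
v = np.random.rand(4, 8, 8)

# Because G_dual here exactly inverts G_primal, the loss is (near) zero;
# during training this term pushes the two generators toward consistency.
print(reconstruction_loss(u, v))
```

No pairing between the U and V batches is required, which is precisely what lets DualGAN train from unlabeled image sets: the supervision signal comes from each image's agreement with its own reconstruction.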