Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks

24 Aug 2020 | Jun-Yan Zhu*, Taesung Park*, Phillip Isola, Alexei A. Efros
This paper presents CycleGAN, a method for learning to translate images between two domains, X and Y, without paired training examples. The method uses cycle-consistent adversarial networks to learn a mapping G: X → Y together with an inverse mapping F: Y → X. Two adversarial discriminators, one per domain, push each generator's outputs to be indistinguishable from real images in its target domain, while a cycle consistency loss requires that translating an image from X to Y and back to X reproduces the original image (and likewise for Y → X → Y). To stabilize training, a least-squares loss is used in place of the standard GAN loss.
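To make the objective concrete, here is a minimal PyTorch sketch of the generator-side loss, assuming the paper's cycle weight λ = 10; the module names (G_XY, G_YX, D_X, D_Y) and the helper function are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def cyclegan_generator_loss(G_XY, G_YX, D_X, D_Y, real_x, real_y,
                            lambda_cyc=10.0):
    # G_XY: X -> Y and G_YX: Y -> X are the two generators; D_X and D_Y
    # are the two domain discriminators (hypothetical nn.Module instances).
    fake_y = G_XY(real_x)   # translate X -> Y
    fake_x = G_YX(real_y)   # translate Y -> X

    # Least-squares adversarial terms (LSGAN): each generator tries to
    # make its discriminator output 1 ("real") on translated images.
    pred_fake_y = D_Y(fake_y)
    pred_fake_x = D_X(fake_x)
    adv = (F.mse_loss(pred_fake_y, torch.ones_like(pred_fake_y)) +
           F.mse_loss(pred_fake_x, torch.ones_like(pred_fake_x)))

    # Cycle consistency: X -> Y -> X and Y -> X -> Y should reconstruct
    # the inputs; the paper penalizes the reconstructions with L1.
    cyc = (F.l1_loss(G_YX(fake_y), real_x) +
           F.l1_loss(G_XY(fake_x), real_y))

    return adv + lambda_cyc * cyc
```

The discriminators are trained with the complementary least-squares objective, pushing their outputs toward 1 on real images and toward 0 on translated ones.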
The method is evaluated on paired datasets used in "pix2pix", including semantic label-to-photo translation on the Cityscapes dataset, map-to-aerial-photo translation on Google Maps data, architectural labels-to-photos, and edges-to-shoes, and is compared against several baselines, including CoGAN, SimGAN, BiGAN/ALI, and pix2pix. It outperforms the unpaired baselines both qualitatively and quantitatively. It is also demonstrated on several applications where paired training data does not exist, such as collection style transfer, object transfiguration, season transfer, and photo enhancement, producing high-quality results even when the input and output domains differ significantly, and outperforming previous approaches that rely on hand-defined factorizations of style and content or on shared embedding functions.

The generator is a deep convolutional network with three convolutions, several residual blocks, and two fractionally-strided convolutions, followed by a final convolution mapping features back to RGB; a sketch of this architecture appears below. Implementations in both PyTorch and Torch are available on the authors' website.
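The following PyTorch sketch illustrates that generator architecture for 256×256 images (the paper uses 9 residual blocks at this resolution, with instance normalization throughout); it is a simplified illustration under those assumptions, not the authors' exact implementation.

```python
import torch.nn as nn

class ResnetBlock(nn.Module):
    """Residual block: two 3x3 convs with instance norm and an identity skip."""
    def __init__(self, dim):
        super().__init__()
        self.block = nn.Sequential(
            nn.ReflectionPad2d(1), nn.Conv2d(dim, dim, 3),
            nn.InstanceNorm2d(dim), nn.ReLU(True),
            nn.ReflectionPad2d(1), nn.Conv2d(dim, dim, 3),
            nn.InstanceNorm2d(dim))

    def forward(self, x):
        return x + self.block(x)

def resnet_generator(n_blocks=9):
    """Three convolutions, n_blocks residual blocks, two fractionally-strided
    convolutions, and a final 7x7 convolution back to RGB."""
    layers = [nn.ReflectionPad2d(3), nn.Conv2d(3, 64, 7),
              nn.InstanceNorm2d(64), nn.ReLU(True),
              nn.Conv2d(64, 128, 3, stride=2, padding=1),    # downsample
              nn.InstanceNorm2d(128), nn.ReLU(True),
              nn.Conv2d(128, 256, 3, stride=2, padding=1),   # downsample
              nn.InstanceNorm2d(256), nn.ReLU(True)]
    layers += [ResnetBlock(256) for _ in range(n_blocks)]
    layers += [nn.ConvTranspose2d(256, 128, 3, stride=2,     # upsample
                                  padding=1, output_padding=1),
               nn.InstanceNorm2d(128), nn.ReLU(True),
               nn.ConvTranspose2d(128, 64, 3, stride=2,      # upsample
                                  padding=1, output_padding=1),
               nn.InstanceNorm2d(64), nn.ReLU(True),
               nn.ReflectionPad2d(3), nn.Conv2d(64, 3, 7), nn.Tanh()]
    return nn.Sequential(*layers)
```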