24 Aug 2020 | Jun-Yan Zhu*, Taesung Park*, Phillip Isola, Alexei A. Efros
The paper presents a method for unpaired image-to-image translation, where the goal is to learn a mapping from an input domain \( X \) to a target domain \( Y \) without paired training examples. The method combines adversarial losses and cycle consistency losses to ensure that the learned mappings are both accurate and consistent. The adversarial losses force the translated images to be indistinguishable from real images in the target domain, while the cycle consistency losses ensure that the mappings are approximate inverses of each other, maintaining the integrity of the translation process. The authors demonstrate the effectiveness of their method on various tasks, including collection style transfer, object transfiguration, season transfer, and photo enhancement, showing superior performance compared to previous methods. The paper also includes a detailed evaluation of the method's performance on paired datasets and discusses its limitations and future directions.
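To make the objective concrete, the paper's full loss is \( \mathcal{L}(G, F, D_X, D_Y) = \mathcal{L}_{GAN}(G, D_Y, X, Y) + \mathcal{L}_{GAN}(F, D_X, Y, X) + \lambda \, \mathcal{L}_{cyc}(G, F) \), where \( G: X \rightarrow Y \) and \( F: Y \rightarrow X \) are the two generators and \( D_X, D_Y \) the corresponding discriminators. Below is a minimal PyTorch-style sketch of how this combined loss could be computed for one batch; the function name and the assumption that the generators and discriminators are passed in as `nn.Module` instances are illustrative choices, not the paper's reference implementation.

```python
import torch
import torch.nn as nn

def cyclegan_generator_loss(G, F, D_X, D_Y, real_x, real_y, lambda_cyc=10.0):
    """Sketch of the combined generator-side CycleGAN objective.

    G, F are generators X->Y and Y->X; D_X, D_Y are discriminators.
    This is an illustrative sketch, not the authors' code.
    """
    mse = nn.MSELoss()  # least-squares GAN loss, as used in the paper
    l1 = nn.L1Loss()    # cycle consistency uses an L1 penalty

    fake_y = G(real_x)  # translate X -> Y
    fake_x = F(real_y)  # translate Y -> X

    # Adversarial terms: translated images should fool the discriminators,
    # i.e. be scored as "real" (target label 1).
    pred_fake_y = D_Y(fake_y)
    pred_fake_x = D_X(fake_x)
    loss_gan = (mse(pred_fake_y, torch.ones_like(pred_fake_y))
                + mse(pred_fake_x, torch.ones_like(pred_fake_x)))

    # Cycle consistency terms: F(G(x)) should reconstruct x,
    # and G(F(y)) should reconstruct y.
    loss_cyc = l1(F(fake_y), real_x) + l1(G(fake_x), real_y)

    return loss_gan + lambda_cyc * loss_cyc
```

The cycle term is what substitutes for paired supervision: without it, the adversarial losses alone admit many mappings that match the target distribution while scrambling content. The paper weights it with \( \lambda = 10 \) and uses a least-squares rather than negative log-likelihood adversarial loss for more stable training.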