Diverse Image-to-Image Translation via Disentangled Representations

2 Aug 2018 | Hsin-Ying Lee*, Hung-Yu Tseng*, Jia-Bin Huang, Maneesh Singh, Ming-Hsuan Yang
The paper "Diverse Image-to-Image Translation via Disentangled Representations" addresses the challenges of image-to-image translation (I2I) in scenarios where aligned training pairs are unavailable and multiple outputs are possible from a single input. The authors propose a disentangled representation framework that embeds images into a domain-invariant content space and a domain-specific attribute space. This framework enables the model to generate diverse outputs by conditioning on both content features and attribute vectors. To handle unpaired training data, a novel cross-cycle consistency loss is introduced, which ensures that the model can reconstruct the original input images after performing cyclic translations. The proposed method is evaluated on various I2I tasks, demonstrating superior performance in terms of both realism and diversity compared to existing methods. Additionally, the model is applied to unsupervised domain adaptation, showing competitive results on datasets such as MNIST-M and Cropped LineMod. The contributions of the paper include the introduction of a disentangled representation framework, the development of a cross-cycle consistency loss, and the demonstration of the model's effectiveness in diverse I2I translation and domain adaptation tasks.The paper "Diverse Image-to-Image Translation via Disentangled Representations" addresses the challenges of image-to-image translation (I2I) in scenarios where aligned training pairs are unavailable and multiple outputs are possible from a single input. The authors propose a disentangled representation framework that embeds images into a domain-invariant content space and a domain-specific attribute space. This framework enables the model to generate diverse outputs by conditioning on both content features and attribute vectors. To handle unpaired training data, a novel cross-cycle consistency loss is introduced, which ensures that the model can reconstruct the original input images after performing cyclic translations. The proposed method is evaluated on various I2I tasks, demonstrating superior performance in terms of both realism and diversity compared to existing methods. Additionally, the model is applied to unsupervised domain adaptation, showing competitive results on datasets such as MNIST-M and Cropped LineMod. The contributions of the paper include the introduction of a disentangled representation framework, the development of a cross-cycle consistency loss, and the demonstration of the model's effectiveness in diverse I2I translation and domain adaptation tasks.