Unsupervised Cross-Domain Image Generation

7 Nov 2016 | Yaniv Taigman, Adam Polyak & Lior Wolf
The paper "Unsupervised Cross-Domain Image Generation" by Yaniv Taigman, Adam Polyak, and Lior Wolf from Facebook AI Research addresses the problem of transferring samples from one domain to another while preserving the function \( f \) that maps inputs to a target domain. The authors propose the Domain Transfer Network (DTN), which employs a compound loss function that includes a multiclass GAN loss, an \( f \)-constancy component, and a regularizing component to encourage \( G \) to map samples from the target domain to themselves. The DTN is evaluated in two visual domains: digits and face images. In the digit domain, the method transfers images from the Street View House Number (SVHN) dataset to the MNIST dataset, achieving visually appealing results. In the face domain, the method transfers random face images to a space of emoji images, generating face emojis that capture facial characteristics more accurately than manually created emojis. The paper also discusses the limitations of the method, such as the asymmetry in the effectiveness of the function \( f \) across domains and the need for explicit domain adaptation. Additionally, it explores the application of domain transfer for unsupervised domain adaptation and demonstrates its effectiveness in this context. The authors conclude that domain transfer is a promising direction for future research, particularly in computational tasks where unsupervised methods are useful.The paper "Unsupervised Cross-Domain Image Generation" by Yaniv Taigman, Adam Polyak, and Lior Wolf from Facebook AI Research addresses the problem of transferring samples from one domain to another while preserving the function \( f \) that maps inputs to a target domain. The authors propose the Domain Transfer Network (DTN), which employs a compound loss function that includes a multiclass GAN loss, an \( f \)-constancy component, and a regularizing component to encourage \( G \) to map samples from the target domain to themselves. The DTN is evaluated in two visual domains: digits and face images. In the digit domain, the method transfers images from the Street View House Number (SVHN) dataset to the MNIST dataset, achieving visually appealing results. In the face domain, the method transfers random face images to a space of emoji images, generating face emojis that capture facial characteristics more accurately than manually created emojis. The paper also discusses the limitations of the method, such as the asymmetry in the effectiveness of the function \( f \) across domains and the need for explicit domain adaptation. Additionally, it explores the application of domain transfer for unsupervised domain adaptation and demonstrates its effectiveness in this context. The authors conclude that domain transfer is a promising direction for future research, particularly in computational tasks where unsupervised methods are useful.