Domain Separation Networks

22 Aug 2016 | Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dilip Krishnan, Dumitru Erhan
Domain Separation Networks (DSNs) learn domain-invariant representations for unsupervised domain adaptation, in which knowledge must be transferred from a labeled source domain to an unlabeled target domain. Rather than aligning the two domains in a single feature space, DSNs explicitly partition the representation space into a shared subspace, which captures structure common to both domains, and private subspaces, which capture properties specific to each domain. Modeling the private components explicitly improves the quality of the extracted domain-invariant features and makes the adaptation process visually interpretable.

The architecture consists of a shared encoder, a private encoder for each domain, and a shared decoder. The shared encoder learns representations common to the source and target domains, each private encoder captures its own domain's specific features, and the shared decoder reconstructs the input image from the sum of its private and shared representations; a task classifier operates on the shared representation of the labeled source samples. A minimal sketch of this structure follows.
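The following PyTorch-style sketch illustrates the overall structure. It is a minimal illustration under stated assumptions, not the paper's implementation: the `Encoder`/`Decoder` layer sizes, the 32x32 input resolution, and the code dimension are choices made here for brevity.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Small convolutional encoder; the exact architecture is illustrative."""
    def __init__(self, code_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),   # 32x32 -> 16x16
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),  # 16x16 -> 8x8
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, code_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a 32x32 image from the sum of shared and private codes."""
    def __init__(self, code_dim=128):
        super().__init__()
        self.fc = nn.Linear(code_dim, 64 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 8x8 -> 16x16
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),              # 16x16 -> 32x32
        )
    def forward(self, h):
        z = self.fc(h).view(-1, 64, 8, 8)
        return self.net(z)

class DSN(nn.Module):
    def __init__(self, code_dim=128, num_classes=10):
        super().__init__()
        self.shared_enc = Encoder(code_dim)    # shared across both domains
        self.private_src = Encoder(code_dim)   # source-specific
        self.private_tgt = Encoder(code_dim)   # target-specific
        self.decoder = Decoder(code_dim)       # shared decoder
        self.classifier = nn.Linear(code_dim, num_classes)  # task head

    def forward(self, x, domain):
        h_shared = self.shared_enc(x)
        h_private = self.private_src(x) if domain == "source" else self.private_tgt(x)
        recon = self.decoder(h_shared + h_private)  # reconstruct from the sum of codes
        logits = self.classifier(h_shared)          # classify from the shared code only
        return h_shared, h_private, recon, logits
```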
The model is trained to minimize a weighted combination of four losses: a task loss (classification on labeled source samples), a reconstruction loss (each input must be recoverable from its shared and private codes; the paper uses a scale-invariant mean squared error), a difference loss, and a similarity loss. The difference loss pushes the shared and private representations of each domain toward orthogonality, so that the two subspaces encode complementary aspects of the input. The similarity loss pushes the shared representations of source and target samples to be indistinguishable from one another; the paper instantiates it either with a domain-adversarial objective or with Maximum Mean Discrepancy (MMD). The combined objective and the difference loss are written out below.
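In the paper's notation, the overall objective is a weighted sum of the four terms, with hyperparameters α, β, γ controlling the trade-offs; the difference loss is the squared Frobenius norm of the product between the shared-code matrix and the private-code matrix of each domain, where the rows of each H are per-sample representations:

```latex
\mathcal{L} = \mathcal{L}_{\mathrm{task}}
            + \alpha\,\mathcal{L}_{\mathrm{recon}}
            + \beta\,\mathcal{L}_{\mathrm{difference}}
            + \gamma\,\mathcal{L}_{\mathrm{similarity}},
\qquad
\mathcal{L}_{\mathrm{difference}}
  = \big\lVert {H^{s}_{c}}^{\top} H^{s}_{p} \big\rVert_F^2
  + \big\lVert {H^{t}_{c}}^{\top} H^{t}_{p} \big\rVert_F^2 .
```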
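A compact sketch of these terms in the same PyTorch style as above, under stated assumptions: the row normalization in the difference loss and the loss weights are illustrative choices rather than the paper's values, plain MSE stands in for the paper's scale-invariant reconstruction error, and the similarity term is shown as a linear-kernel MMD (the paper also reports an adversarial variant).

```python
import torch
import torch.nn.functional as F

def difference_loss(h_shared, h_private):
    # Soft subspace orthogonality: squared Frobenius norm of H_c^T H_p.
    # Row-normalizing the codes first is an assumed implementation detail,
    # not taken from the paper; it keeps the penalty scale-independent.
    h_c = F.normalize(h_shared, p=2, dim=1)
    h_p = F.normalize(h_private, p=2, dim=1)
    return (h_c.t() @ h_p).pow(2).sum()

def mmd_loss(h_src, h_tgt):
    # Similarity loss as a simple linear-kernel MMD between the shared
    # codes of the two domains.
    delta = h_src.mean(dim=0) - h_tgt.mean(dim=0)
    return delta.dot(delta)

def dsn_loss(logits_src, labels_src,
             recon_src, x_src, recon_tgt, x_tgt,
             h_shared_src, h_private_src,
             h_shared_tgt, h_private_tgt,
             alpha=0.1, beta=0.05, gamma=0.25):
    # Weighted combination of the four DSN terms. The weights here are
    # placeholders, not the paper's tuned values.
    task = F.cross_entropy(logits_src, labels_src)
    recon = F.mse_loss(recon_src, x_src) + F.mse_loss(recon_tgt, x_tgt)
    diff = (difference_loss(h_shared_src, h_private_src)
            + difference_loss(h_shared_tgt, h_private_tgt))
    sim = mmd_loss(h_shared_src, h_shared_tgt)
    return task + alpha * recon + beta * diff + gamma * sim
```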
DSNs are evaluated on unsupervised domain adaptation scenarios for object classification and 3D pose estimation, built from MNIST, MNIST-M, SVHN, GTSRB, and the LINEMOD object dataset. The approach is most effective when the source and target domains differ in low-level image statistics (e.g., noise, color, illumination) but share high-level content, as when adapting from synthetic renderings to real images. Because the shared decoder can reconstruct an input from its shared code, its private code, or both together, DSNs also yield visualizations that make the separation of domain-specific and domain-invariant factors directly interpretable.

The paper compares against prior domain adaptation methods, including Correlation Alignment (CORAL), Domain-Adversarial Neural Networks (DANN), whose gradient reversal mechanism the paper reuses for its adversarial similarity loss (sketched below), and Maximum Mean Discrepancy (MMD) regularization, and reports higher classification and pose estimation accuracy than these baselines on the tasks above. The ability to separate domain-specific factors from shared structure is what makes DSNs effective in the standard setting where the source domain is labeled and the target domain is not.
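For completeness, the adversarial variant of the similarity loss can be sketched with a gradient reversal layer, the mechanism introduced by DANN. This is a generic illustration of that mechanism, not the paper's code; `domain_clf` is an assumed small module mapping a code vector to a single logit.

```python
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; scales the gradient by -lam in the
    # backward pass, so the shared encoder learns to fool the domain
    # classifier while the classifier itself trains normally.
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def adversarial_similarity_loss(h_shared_src, h_shared_tgt, domain_clf, lam=1.0):
    # DANN-style similarity loss on the shared codes: source samples get
    # domain label 0, target samples get domain label 1.
    h = GradReverse.apply(torch.cat([h_shared_src, h_shared_tgt], dim=0), lam)
    logits = domain_clf(h).squeeze(1)
    labels = torch.cat([torch.zeros(len(h_shared_src)),
                        torch.ones(len(h_shared_tgt))]).to(logits.device)
    return F.binary_cross_entropy_with_logits(logits, labels)
```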