Unsupervised Domain Adaptation by Backpropagation

27 Feb 2015 | Yaroslav Ganin, Victor Lempitsky
This paper proposes a new approach to unsupervised domain adaptation in deep feed-forward architectures. The method trains on a large amount of labeled data from the source domain together with a large amount of unlabeled data from the target domain; no labeled target-domain data is required. The key idea is to promote the emergence of deep features that are discriminative for the main learning task on the source domain yet invariant to the shift between domains.

This is achieved by jointly optimizing the feature extractor against two discriminative classifiers: a label predictor, trained to predict the class labels, and a domain classifier, trained to distinguish source examples from target examples. The feature extractor's parameters are optimized to minimize the label predictor's loss while maximizing the domain classifier's loss, which encourages domain-invariant features. This min-max objective is implemented in a standard feed-forward network via a gradient reversal layer, which acts as the identity during the forward pass and multiplies the gradient by a negative constant during backpropagation, so the entire model can be trained with standard stochastic gradient descent.

On image classification tasks, the approach achieves clear adaptation effects in the presence of large domain shifts and outperforms previous state-of-the-art methods on the Office benchmark. The method is generic and can be applied to any deep architecture trainable by backpropagation. The paper also discusses related domain adaptation methods and presents results on several image datasets in addition to Office, showing the approach to be effective in the unsupervised setting and extensible to the semi-supervised setting when some labeled target data is available.
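The gradient reversal layer described above can be sketched without any deep learning framework: it is the identity on the forward pass and multiplies the incoming gradient by a negative constant on the backward pass. The class name, the `lam` parameter, and the toy list-based plumbing below are illustrative assumptions, not the authors' code; in practice this would be a custom autograd function inside a framework such as PyTorch or Theano.

```python
class GradientReversal:
    """Minimal sketch of a gradient reversal layer (GRL).

    Forward pass: identity, so the domain classifier sees the features
    unchanged. Backward pass: multiply the gradient by -lam, so the feature
    extractor is pushed to *maximize* the domain classifier's loss while
    gradients from the label predictor pass through other layers normally.
    """

    def __init__(self, lam=1.0):
        self.lam = lam  # trade-off between label loss and domain loss

    def forward(self, x):
        # Identity: features flow through untouched.
        return x

    def backward(self, grad_output):
        # Reverse (and scale) the gradient coming from the domain classifier.
        return [-self.lam * g for g in grad_output]


if __name__ == "__main__":
    grl = GradientReversal(lam=0.5)
    features = [1.0, -2.0, 3.0]
    print(grl.forward(features))             # identity on the forward pass
    print(grl.backward([0.2, -0.4, 1.0]))    # reversed, scaled gradients
```

Because the reversal happens only in the backward pass, the rest of the network can be trained with unmodified stochastic gradient descent, which is what makes the approach so easy to drop into existing architectures.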