8 Oct 2015 | Eric Tzeng*, Judy Hoffman*, Trevor Darrell, Kate Saenko
This paper proposes a CNN architecture for simultaneous domain and task transfer that outperforms existing methods on two standard visual domain adaptation benchmarks. The network jointly optimizes for domain invariance and task transfer through two losses: a domain confusion loss, which pushes the marginal feature distributions of the source and target domains to match, and a soft label distribution matching loss, which transfers empirical inter-category correlations from the source to the target domain. These are combined with a standard softmax cross-entropy loss on the labeled data, on top of a modified version of the standard Krizhevsky architecture.

Evaluated on the Office benchmark and a new cross-dataset collection, the method improves classification performance in both supervised and semi-supervised adaptation settings, surpassing the prior state of the art. It adapts to new domains with limited or no labeled data by leveraging unlabeled target data together with a few human-labeled examples. The paper's analysis shows that combining the domain confusion and soft label losses outperforms either loss alone, confirming that the architecture both enforces domain invariance and transfers task information across domains.
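The two transfer losses can be sketched in a few lines of NumPy. This is a hypothetical illustration, not the paper's actual implementation; the function and argument names (`domain_logits`, `class_logits`, `source_soft_labels`) are assumptions for the sketch:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def domain_confusion_loss(domain_logits):
    """Cross-entropy between the domain classifier's predictions and a
    uniform distribution over domains; it is minimized when the classifier
    cannot tell source examples from target examples."""
    probs = softmax(domain_logits)
    n_domains = probs.shape[-1]
    return -np.mean(np.sum(np.log(probs) / n_domains, axis=-1))

def soft_label_loss(class_logits, source_soft_labels, temperature=2.0):
    """Cross-entropy between the target network's softened predictions and
    per-class average source 'soft label' distributions, transferring
    inter-category correlations from source to target."""
    probs = softmax(class_logits, temperature)
    return -np.mean(np.sum(source_soft_labels * np.log(probs), axis=-1))
```

In this sketch the domain confusion loss reaches its minimum, log(number of domains), exactly when the domain classifier outputs a uniform distribution, which matches the intuition of making source and target features indistinguishable.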
The paper concludes that the proposed method is a viable alternative to traditional fine-tuning strategies when limited or no labeled data is available per category in the target domain.