27 Aug 2019 | Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, Bo Wang
This paper addresses the challenge of multi-source domain adaptation (MSDA), a practical scenario where training data is collected from multiple domains. The authors make three major contributions: (1) they collect and annotate the largest unsupervised domain adaptation (UDA) dataset to date, DomainNet, which contains six domains and about 0.6 million images distributed among 345 categories; (2) they propose M³SDA, a deep learning approach that transfers knowledge from multiple labeled source domains to an unlabeled target domain by dynamically aligning moments of their feature distributions; and (3) they provide new theoretical insights for moment matching approaches in both single- and multiple-source domain adaptation. Extensive experiments demonstrate the effectiveness of the proposed dataset and model, showing superior performance over state-of-the-art methods. The dataset and code are available at <http://ai.bu.edu/M3SDA/>.
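To make "aligning moments of feature distributions" concrete, the following is a minimal NumPy sketch of a moment-matching objective in the spirit of M³SDA: it penalizes the distance between the first few raw moments of each source's features and the target's, plus the distances between every pair of sources. Function names and the choice of raw (rather than centered) moments are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def moment_distance(feats_a, feats_b, order=2):
    # Sum of Euclidean distances between the first `order` raw moments
    # of two feature batches (rows = samples, columns = feature dims).
    d = 0.0
    for k in range(1, order + 1):
        m_a = (feats_a ** k).mean(axis=0)
        m_b = (feats_b ** k).mean(axis=0)
        d += np.linalg.norm(m_a - m_b)
    return d

def msda_moment_loss(source_feats, target_feats, order=2):
    # Average moment distance from each source domain to the target,
    # plus the average pairwise distance among the source domains
    # (illustrative of the M3SDA alignment term, not the exact objective).
    n = len(source_feats)
    loss = sum(moment_distance(s, target_feats, order)
               for s in source_feats) / n
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    if pairs:
        loss += sum(moment_distance(source_feats[i], source_feats[j], order)
                    for i, j in pairs) / len(pairs)
    return loss
```

In a full pipeline this term would be added to the classification loss on the labeled sources, so the feature extractor learns representations whose low-order statistics agree across all domains, including the unlabeled target.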