Moment Matching for Multi-Source Domain Adaptation

27 Aug 2019 | Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, Bo Wang
This paper introduces DomainNet, the largest multi-domain dataset for domain adaptation to date, containing six domains, 345 categories, and approximately 0.6 million images. The authors propose M³SDA, a deep learning approach for multi-source domain adaptation that transfers knowledge from multiple labeled source domains to an unlabeled target domain by dynamically aligning the moments of their feature distributions. They also provide new theoretical insights into moment-matching approaches for both single- and multi-source domain adaptation.

The paper discusses two key challenges in multi-source domain adaptation: the lack of large-scale multi-domain benchmarks and the difficulty of aligning multiple source domains with each other as well as with the target. The proposed moment-matching approach addresses the alignment problem by matching the moments of the domains' feature distributions, and the accompanying theoretical analysis supports its effectiveness. Extensive experiments on tasks including digit classification and image recognition show that M³SDA achieves state-of-the-art results on multi-source benchmarks. The dataset and code are available at http://ai.bu.edu/M3SDA/.
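The core idea of moment matching — minimizing pairwise distances between the moments of each domain's feature distribution — can be sketched as follows. This is an illustrative NumPy sketch of the general technique, not the authors' implementation; the function name and loss form are my own assumptions.

```python
import numpy as np

def moment_distance(feats, k=2):
    """Pairwise moment-matching loss over a list of feature matrices.

    feats: list of (n_i, d) arrays, one per domain (e.g. several source
           domains plus the target); n_i samples, d feature dimensions.
    k:     highest moment order to match (k=2 matches means and
           second-order moments).
    Returns the sum, over moment orders, of pairwise Euclidean
    distances between the per-domain moment vectors.
    """
    loss = 0.0
    for order in range(1, k + 1):
        # order-th raw moment of each feature dimension, per domain
        moments = [np.mean(f ** order, axis=0) for f in feats]
        # accumulate pairwise distances between domain moments
        for i in range(len(moments)):
            for j in range(i + 1, len(moments)):
                loss += np.linalg.norm(moments[i] - moments[j])
    return loss
```

In training, such a loss would be minimized jointly with the classification loss on the labeled source domains, pulling the feature distributions of all domains toward each other.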