Maximum Classifier Discrepancy for Unsupervised Domain Adaptation

3 Apr 2018 | Kuniaki Saito, Kohei Watanabe, Yoshitaka Ushiku, and Tatsuya Harada
This paper proposes a method for unsupervised domain adaptation (UDA) that aligns source and target distributions while taking task-specific decision boundaries into account. The method introduces an adversarial learning scheme built from a feature generator and two classifiers: the classifiers are trained to maximize the discrepancy between their outputs on target samples, while the generator is trained to minimize that discrepancy. This adversarial process pushes the generator to produce target features that fall within the support of the source distribution, improving performance on target samples.

The method is evaluated on image classification and semantic segmentation benchmarks. It outperforms existing methods in most cases, particularly when the divergence between the source and target domains is large, and it remains effective in semantic segmentation even when adapting from synthetic to real-world images. The authors also provide theoretical grounding, connecting the approach to the theory of Ben-David et al., which bounds the expected error on the target domain in terms of the source-domain error and the domain divergence. Experiments on toy problems, digit classification, and semantic segmentation show that the method achieves higher accuracy and better alignment of features between source and target domains than competing methods, and that it effectively reduces the discrepancy between the two classifiers on target samples, leading to more accurate classification.
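The discrepancy term at the heart of this min-max game can be sketched as below. This is a minimal NumPy illustration, assuming the L1 distance between the two classifiers' class-probability outputs as the discrepancy measure; the function and variable names are illustrative, not from the paper's code.

```python
import numpy as np

def softmax(logits):
    """Convert logits to class probabilities, row-wise."""
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def discrepancy(p1, p2):
    """Mean absolute difference between two classifiers' probability
    outputs on the same batch of (target) samples.

    In the adversarial scheme, the two classifiers are updated to
    MAXIMIZE this quantity on target samples (with the generator fixed),
    and the generator is updated to MINIMIZE it (with the classifiers
    fixed)."""
    return float(np.mean(np.abs(p1 - p2)))

# Toy target-batch logits from two hypothetical classifiers F1 and F2
# sharing one feature generator (8 samples, 3 classes).
rng = np.random.default_rng(0)
logits_f1 = rng.normal(size=(8, 3))
logits_f2 = rng.normal(size=(8, 3))

d = discrepancy(softmax(logits_f1), softmax(logits_f2))
```

Intuitively, target samples near the source decision boundaries let the two classifiers disagree; driving `d` down through the generator moves those features away from the boundaries and toward the source support.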