Wasserstein Distance Guided Representation Learning for Domain Adaptation

2018 | Jian Shen, Yanru Qu, Weinan Zhang, Yong Yu
This paper proposes Wasserstein Distance Guided Representation Learning (WD-GRL), a novel approach to domain adaptation that learns domain-invariant feature representations which remain discriminative for the source task. Inspired by Wasserstein GANs, the method uses a neural network called the domain critic to estimate the empirical Wasserstein distance between source and target feature distributions; the feature extractor is then optimized adversarially to minimize this estimated distance. The Wasserstein distance offers theoretical advantages over earlier adversarial objectives: its gradients stay informative even when the source and target feature distributions are far apart, and it admits a generalization bound on the target error. Empirical studies on common sentiment and image classification adaptation datasets show that WD-GRL outperforms state-of-the-art domain-invariant representation learning approaches.

Concretely, the domain critic is trained to maximize the estimated Wasserstein distance between source and target features, while the feature extractor is trained to minimize it; a classifier trained on labeled source features keeps the shared representation discriminative. This adversarial game reduces the domain discrepancy and yields strong target-domain performance. The gradient analysis and the generalization bound make training more reliable than previous adversarial adaptation methods built on a domain discriminator. WD-GRL can be integrated into existing domain adaptation frameworks, remains effective on small-scale datasets, and applies to a variety of adaptation tasks, outperforming competing approaches in both accuracy and transferability.
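To make the adversarial objective above concrete, here is a minimal PyTorch-style sketch. The network sizes, hyperparameters (n_critic, gp_weight, wd_weight), and helper names (mlp, gradient_penalty, train_step) are illustrative assumptions rather than the paper's exact configuration; the gradient penalty follows the WGAN-GP formulation commonly used to enforce the critic's Lipschitz constraint.

```python
# Sketch of the WD-GRL adversarial training step (assumed architectures and
# hyperparameters; not the authors' released implementation).
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=64):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim))

feature_extractor = mlp(100, 32)   # F: maps inputs to shared features
domain_critic     = mlp(32, 1)     # D: scores features to estimate the Wasserstein distance
classifier        = mlp(32, 2)     # C: labels source-domain features

opt_fc = torch.optim.Adam(
    list(feature_extractor.parameters()) + list(classifier.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(domain_critic.parameters(), lr=1e-3)

def gradient_penalty(critic, h_s, h_t):
    # Penalize deviation of the critic's gradient norm from 1 on interpolated
    # features, approximately enforcing the 1-Lipschitz constraint required
    # for the Wasserstein-1 estimate.
    alpha = torch.rand(h_s.size(0), 1)
    interpolates = (alpha * h_s + (1 - alpha) * h_t).requires_grad_(True)
    scores = critic(interpolates)
    grads = torch.autograd.grad(scores.sum(), interpolates, create_graph=True)[0]
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()

def train_step(x_s, y_s, x_t, n_critic=5, gp_weight=10.0, wd_weight=1.0):
    # 1) Critic steps: maximize the empirical Wasserstein estimate
    #    E[D(F(x_s))] - E[D(F(x_t))] with the features held fixed.
    for _ in range(n_critic):
        with torch.no_grad():
            h_s, h_t = feature_extractor(x_s), feature_extractor(x_t)
        wd = domain_critic(h_s).mean() - domain_critic(h_t).mean()
        loss_d = -wd + gp_weight * gradient_penalty(domain_critic, h_s, h_t)
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Feature/classifier step: minimize source classification loss plus the
    #    estimated Wasserstein distance, pushing the features toward domain
    #    invariance while staying discriminative.
    h_s, h_t = feature_extractor(x_s), feature_extractor(x_t)
    wd = domain_critic(h_s).mean() - domain_critic(h_t).mean()
    loss_fc = nn.functional.cross_entropy(classifier(h_s), y_s) + wd_weight * wd
    opt_fc.zero_grad(); loss_fc.backward(); opt_fc.step()
```

The critic is updated several times per feature-extractor step so that the Wasserstein estimate is reasonably tight before the shared features move, which is the usual design choice in Wasserstein-GAN-style training.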