Domain-Agnostic Mutual Prompting for Unsupervised Domain Adaptation

5 Mar 2024 | Zhekai Du, Xinyao Li, Fengling Li, Ke Lu, Lei Zhu, Jingjing Li
The paper introduces Domain-Agnostic Mutual Prompting (DAMP), a novel framework for unsupervised domain adaptation (UDA) that leverages large-scale pre-trained vision-language models (VLMs) to exploit domain-invariant semantics. DAMP addresses the limitations of traditional UDA methods, which often fail to utilize rich semantics effectively and struggle with complex domain shifts. By jointly optimizing textual and visual prompts, DAMP aligns visual and textual embeddings across domains, enabling better feature alignment and cross-domain knowledge transfer. The method uses a cross-attention mechanism to exchange information between the two modalities, ensuring that the learned prompts are domain-agnostic and instance-conditioned. Two regularizations, a semantic-consistency loss and an instance-discrimination contrastive loss, keep the learned prompts free of domain-specific information and discriminative at the instance level. Extensive experiments on three UDA benchmarks demonstrate that DAMP outperforms state-of-the-art approaches, with significant gains in classification accuracy and domain adaptation performance.
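To make the two core mechanisms concrete, here is a minimal PyTorch sketch, not the authors' implementation. It assumes textual and visual prompts are sequences of tokens in a shared embedding dimension; the class and function names (`MutualPromptExchange`, `instance_discrimination_loss`) are hypothetical, and the contrastive term is a generic InfoNCE formulation standing in for the paper's instance-discrimination loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MutualPromptExchange(nn.Module):
    """Hypothetical sketch of mutual prompting: textual and visual
    prompt tokens attend to each other via cross-attention, so each
    set becomes conditioned on the other modality (and, when the
    visual prompts are derived from image features, on the instance)."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.text_attends_visual = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.visual_attends_text = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, text_prompts: torch.Tensor, visual_prompts: torch.Tensor):
        # Cross-attention in both directions, with residual connections:
        # each modality's prompts query the other's prompt tokens.
        t, _ = self.text_attends_visual(text_prompts, visual_prompts, visual_prompts)
        v, _ = self.visual_attends_text(visual_prompts, text_prompts, text_prompts)
        return text_prompts + t, visual_prompts + v

def instance_discrimination_loss(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.07):
    """Generic InfoNCE-style contrastive loss: the two embeddings of
    the same instance are positives, all other in-batch pairs are
    negatives. A stand-in for the paper's instance-discrimination term."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau                     # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)
```

The residual connections in the exchange step reflect a common design choice for prompt refinement: the original prompts are preserved while the cross-modal update is added on top, which tends to stabilize joint optimization of both prompt sets.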