UNCERTAINTY-GUIDED CONTRASTIVE LEARNING FOR SINGLE SOURCE DOMAIN GENERALISATION


14 Mar 2024 | Anastasios Arsenos¹, Dimitrios Kollias², Evangelos Petrongonas, Christos Skiros³, Stefanos Kollias¹
This paper introduces the Contrastive Uncertainty Domain Generalisation Network (CUDGNet), a framework for single-source domain generalisation (SSDG). The key idea is to expand the source capacity in both the input and label spaces through a fictitious domain generator, while jointly learning domain-invariant representations of each class with contrastive learning. The framework consists of a task model M and a domain augmentation generator G that collaborate to create safe and effective fictitious domains: the generator produces new domains guided by uncertainty assessment and extends them systematically to improve coverage, while contrastive learning is introduced into the training of the task model M to obtain cross-domain invariant representations.
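To make the interplay between the task model M and the generator G concrete, the following is a minimal PyTorch-style sketch of one training step, assuming a simple adversarial, uncertainty-guided augmentation scheme. The module definitions, the entropy-based uncertainty proxy, and the weight `lam_adv` are illustrative assumptions rather than the paper's exact objectives.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-ins for the task model M and the domain augmentation
# generator G; the architectures used in the paper are not reproduced here.
task_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
generator = nn.Sequential(nn.Conv2d(3, 3, kernel_size=3, padding=1), nn.Tanh())

opt_m = torch.optim.SGD(task_model.parameters(), lr=1e-2)
opt_g = torch.optim.SGD(generator.parameters(), lr=1e-2)

def training_step(x, y, lam_adv=1.0):
    """One simplified step: G proposes a fictitious domain, M fits source + fictitious data."""
    # 1) Generator step: perturb source images towards regions where the task
    #    model is both wrong and uncertain (gradient ascent on the objective).
    x_fict = x + generator(x)
    logits = task_model(x_fict)
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)  # uncertainty proxy
    adv_objective = F.cross_entropy(logits, y) + lam_adv * entropy.mean()
    opt_g.zero_grad()
    (-adv_objective).backward()   # maximise the adversarial objective w.r.t. G
    opt_g.step()

    # 2) Task-model step: train M jointly on source and fictitious samples.
    x_fict = x + generator(x).detach()
    m_loss = F.cross_entropy(task_model(x), y) + F.cross_entropy(task_model(x_fict), y)
    opt_m.zero_grad()
    m_loss.backward()
    opt_m.step()
    return m_loss.item()

# Toy batch standing in for source-domain images (e.g. CIFAR-10-sized inputs).
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
print(training_step(x, y))
```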
The main contributions of this work are: (1) a novel framework that leverages adversarial data augmentation and style transfer for domain expansion while preserving semantic information through contrastive learning; (2) uncertainty estimation in a single forward pass while achieving state-of-the-art accuracy; and (3) validation of the framework through comparison and ablation studies on two SSDG datasets.

The framework includes a transformation component (TC) that maps an initial image x from the original domain S to a novel image within the same domain, while the domain augmentation generator produces new domains that retain class-specific details. Style manipulation with exact feature distribution mixing (EFDMix) further enriches the input space, and contrastive learning is used to learn representations that are invariant to domain shifts and to avoid the representation collapse that extreme domain shifts or feature perturbations can cause.

Extensive experiments on two SSDG datasets, CIFAR-10-C and PACS, demonstrate the effectiveness of the approach: CUDGNet achieves the highest average accuracy among the compared methods. The framework also provides efficient uncertainty estimation at inference time from a single forward pass through the generator subnetwork; this estimate aligns with Bayesian uncertainty estimation while being significantly faster to compute. The ablation study shows that the transformation component and style transfer each improve performance significantly, and with the integration of the contrastive loss the model reaches a new state of the art.
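For reference, EFDMix mixes the order statistics of a feature map with those of another sample in the batch. The sketch below follows the published EFDM/EFDMix formulation in simplified form; the function name `efdmix`, the Beta(alpha, alpha) mixing weight, and where the layer is applied in the network are assumptions, not details taken from this paper.

```python
import torch
from torch.distributions import Beta

def efdmix(x, alpha=0.1):
    """EFDMix-style feature mixing (simplified sketch).

    Exchanges the per-channel order statistics of each sample with those of a
    randomly chosen partner in the batch, interpolated by lambda ~ Beta(alpha, alpha).
    """
    b, c, h, w = x.shape
    lam = Beta(alpha, alpha).sample((b, 1, 1)).to(x.device)   # per-sample mixing weight
    perm = torch.randperm(b)                                  # pick a "style" partner per sample

    x_flat = x.view(b, c, -1)
    value_x, index_x = torch.sort(x_flat, dim=-1)             # content order statistics
    value_y, _ = torch.sort(x_flat[perm], dim=-1)             # style order statistics

    inverse_index = index_x.argsort(dim=-1)
    matched = value_y.gather(-1, inverse_index)               # style values at content ranks

    # Replace content order statistics with mixed ones; the detach keeps the
    # gradient path through the original features, as in the EFDM paper.
    mixed = x_flat + (1.0 - lam) * (matched - x_flat.detach())
    return mixed.view(b, c, h, w)

# Example: mix feature-map styles within a batch of intermediate CNN features.
feats = torch.randn(8, 64, 16, 16)
print(efdmix(feats).shape)   # torch.Size([8, 64, 16, 16])
```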
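The contrastive objective that keeps source and fictitious-domain representations of the same class together can be illustrated with a generic supervised contrastive loss. This is a sketch of the standard formulation, not the paper's exact loss; the temperature, the assumption of paired source/augmented embeddings, and the function name are placeholders.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(z_src, z_fict, labels, temperature=0.1):
    """Supervised contrastive loss over source and fictitious-domain embeddings.

    Embeddings of the same class (across both domains) are treated as positives,
    all other embeddings in the batch as negatives.
    """
    z = F.normalize(torch.cat([z_src, z_fict], dim=0), dim=1)   # (2B, D)
    y = torch.cat([labels, labels], dim=0)                      # (2B,)

    sim = z @ z.t() / temperature                               # pairwise similarities
    self_mask = torch.eye(len(y), dtype=torch.bool, device=z.device)
    sim.masked_fill_(self_mask, float('-inf'))                  # exclude self-similarity

    pos_mask = (y.unsqueeze(0) == y.unsqueeze(1)) & ~self_mask  # same-class pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)  # log-softmax over all others
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0)

    # Average log-probability of positives per anchor, then average over anchors.
    loss = -pos_log_prob.sum(1) / pos_mask.sum(1).clamp(min=1)
    return loss.mean()

# Example with random embeddings standing in for features of x and its fictitious view.
z_src, z_fict = torch.randn(8, 128), torch.randn(8, 128)
labels = torch.randint(0, 10, (8,))
print(supervised_contrastive_loss(z_src, z_fict, labels))
```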