DOMAIN GENERALIZATION VIA INVARIANT FEATURE REPRESENTATION

10 Jan 2013 | KRIKAMOL MUANDET, DAVID BALDUZZI, AND BERNHARD SCHÖLKOPF
This paper addresses the challenge of domain generalization: applying knowledge learned from multiple related domains to previously unseen domains. The authors propose Domain-Invariant Component Analysis (DICA), a kernel-based optimization algorithm that learns an invariant transformation by minimizing the dissimilarity across domains while preserving the functional relationship between input and output variables. A theoretical analysis shows that reducing this dissimilarity improves the expected generalization ability of classifiers on new domains, and experiments on synthetic and real-world datasets demonstrate that DICA learns invariant features and improves classifier performance. DICA is shown to be related to other dimensionality reduction techniques such as kernel principal component analysis (KPCA) and transfer component analysis (TCA). The paper also provides a learning-theoretic bound on the generalization error of classifiers trained after DICA preprocessing, highlighting the trade-off between reducing distributional variance and the complexity of the transformation.
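To make the idea concrete, below is a minimal, hedged sketch of an unsupervised DICA-style computation in Python. It is not the paper's exact algorithm: it assumes equal-sized domains, fixes an RBF kernel, and simply trades off the variance of the projected data against the "distributional variance" (the spread of the per-domain kernel mean embeddings) via a generalized eigenvalue problem. The function names, the regularization term, and the toy data are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.linalg import eigh

def rbf_kernel(X, gamma=1.0):
    """Gram matrix of the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(sq, 0.0))

def dica_like_components(domains, gamma=1.0, eps=1e-3, n_components=2):
    """Simplified, unsupervised DICA-style projection (a sketch, not the
    paper's exact formulation). Assumes all domains have equal size."""
    N = len(domains)           # number of source domains
    m = domains[0].shape[0]    # samples per domain (assumed equal)
    X = np.vstack(domains)
    n = N * m
    K = rbf_kernel(X, gamma)

    # Q is built so that beta^T K Q K beta equals the distributional
    # variance of the projection: (1/N) * sum_i ||proj(mu_i) - proj(mu_bar)||^2,
    # where mu_i is the kernel mean embedding of domain i.
    Q = np.full((n, n), -1.0 / (N**2 * m**2))
    for i in range(N):
        blk = slice(i * m, (i + 1) * m)
        Q[blk, blk] += 1.0 / (N * m**2)

    # Generalized eigenproblem: maximize the projected data variance
    # (numerator A) while penalizing distributional variance plus a
    # regularizer (denominator Bmat). eigh returns eigenvalues ascending.
    A = K @ K / n
    Bmat = K @ Q @ K + K + eps * np.eye(n)
    _, vecs = eigh(A, Bmat)
    B = vecs[:, -n_components:]   # coefficients of the top directions

    return X, B, K @ B            # inputs, coefficients, invariant features

# Toy usage: three domains sharing structure but differing by a mean shift.
rng = np.random.default_rng(0)
domains = [rng.normal(loc=shift, size=(50, 5)) for shift in (0.0, 0.5, 1.0)]
X, B, Z = dica_like_components(domains, gamma=0.5)
print(Z.shape)  # (150, 2): invariant features for the pooled sample
```

The columns of B define directions in the span of the training features along which the per-domain distributions look similar; a downstream classifier would then be trained on the projected features K @ B, which is the sense in which the transformation is "invariant" across domains.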