Domain Generalization through Meta-Learning: A Survey

22 Aug 2024 | Arsham Gholamzadeh Khoei*, Yinan Yu and Robert Feldt
This survey provides an overview of meta-learning for domain generalization (DG), focusing on its role in enabling models to generalize to unseen domains. Traditional machine learning assumes that training and testing data are identically distributed, but real-world applications often involve domain shifts that invalidate this assumption. DG aims to train models on multiple source domains so that they perform well on unseen target domains, without access to target-domain data. Meta-learning offers a promising approach by learning transferable knowledge across tasks, enabling fast adaptation to new domains. The survey introduces a novel taxonomy with two axes: generalizability (feature extraction strategies) and discriminability (classifier learning methodologies), offering a granular view of DG methods. A decision graph helps readers navigate the taxonomy based on data availability and the nature of the domain shift, so that they can select appropriate models for their specific needs. The paper reviews existing methods and theoretical foundations, mapping out the fundamentals of the field, and highlights the importance of domain-invariant features and discriminative classifiers for effective generalization. It also discusses challenges such as ensuring sufficient diversity among learning tasks and handling distributional shifts. The meta-learning approaches for DG covered include MLDG, MetaReg, Feature-Critic Networks, Episodic Training, Meta-learning Invariant Representation, MASF, S-MLDG, MetaVIB, and M-ADA; these methods aim to enhance generalization by learning robust representations and adapting to new domains.
The survey concludes with a discussion of the significance of the findings, key challenges, and promising research directions in meta-learning for domain generalization.
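As a concrete illustration of the episodic meta-train/meta-test scheme behind methods such as MLDG, below is a minimal sketch in NumPy. It trains a toy logistic-regression model on synthetic source domains, holding one domain out per step as a simulated unseen domain, and uses a first-order approximation of the meta-update. The synthetic data, the hyperparameters (`alpha`, `beta`, `gamma`), and the `loss_and_grad` helper are all hypothetical illustrations, not taken from the survey or the original MLDG paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grad(w, X, y):
    """Binary logistic loss and its gradient with respect to weights w."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    grad = X.T @ (p - y) / len(y)
    return loss, grad

# Hypothetical source domains: same underlying task, shifted input distributions.
domains = []
for shift in (0.0, 0.5, 1.0):
    X = rng.normal(shift, 1.0, size=(64, 5))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(float)
    domains.append((X, y))

w = np.zeros(5)
alpha, beta, gamma = 0.1, 1.0, 0.1  # inner lr, meta-test weight, outer lr

for step in range(100):
    # Episodic split: hold one domain out as meta-test, train on the rest.
    hold_out = step % len(domains)
    meta_test = domains[hold_out]
    meta_train = [d for i, d in enumerate(domains) if i != hold_out]

    # Meta-train: aggregate gradient over the remaining source domains.
    g_train = np.mean([loss_and_grad(w, X, y)[1] for X, y in meta_train], axis=0)

    # Virtual inner step, then evaluate on the held-out domain
    # (first-order approximation: no second-order terms through w_adapted).
    w_adapted = w - alpha * g_train
    _, g_test = loss_and_grad(w_adapted, *meta_test)

    # Outer update combines meta-train and meta-test gradients, so the
    # learned weights are rewarded for generalizing after adaptation.
    w -= gamma * (g_train + beta * g_test)

final_losses = [loss_and_grad(w, X, y)[0] for X, y in domains]
```

The key design point the sketch illustrates is that the meta-test gradient is computed *after* the simulated adaptation step, so the outer update favors parameters that remain useful on a domain withheld during meta-training.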