24 May 2022 | Jindong Wang, Cuiling Lan, Chang Liu, Yidong Ouyang, Tao Qin, Senior Member, IEEE, Wang Lu, Yiqiang Chen, Senior Member, IEEE, Wenjun Zeng, Fellow, IEEE, Philip S. Yu, Fellow, IEEE
Domain generalization (DG) aims to learn models that generalize to unseen test domains, addressing the challenge that training data are drawn from distributions that are different from, yet related to, the test distribution. This survey reviews recent advances in DG, covering its formulations, theories, algorithms, datasets, applications, and future directions. DG differs from domain adaptation (DA) in that the target domain is not accessible during training, which makes DG more challenging but also more realistic. Related research areas include transfer learning, multi-task learning, meta-learning, and zero-shot learning. DG theory centers on bounding the target risk in terms of the source risks and on analyzing domain-invariant representations. DG methods fall into three categories: data manipulation, representation learning, and learning strategies. Data manipulation covers data augmentation and generation; representation learning focuses on domain-invariant features and feature disentanglement; learning strategies include ensemble learning, meta-learning, gradient operations, and distributionally robust optimization. The survey also introduces benchmark datasets and open-source code for DG research, highlighting the role of domain information in improving generalization. Future directions include deepening theoretical understanding, developing new algorithms, and broadening applications across domains.
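To make the data-manipulation category concrete, one widely used augmentation idea is mixing examples across source domains via convex combinations of inputs and labels (Mixup-style). The sketch below is illustrative only; the function name `mixup_domains` and its parameters are hypothetical and assume one-hot label vectors.

```python
import numpy as np

def mixup_domains(x_a, y_a, x_b, y_b, alpha=0.2, rng=None):
    """Mixup-style cross-domain augmentation (illustrative sketch).

    Draws a mixing weight from a Beta(alpha, alpha) distribution and
    returns convex combinations of inputs and one-hot labels taken
    from two different source domains.
    """
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)  # mixing weight in [0, 1]
    x_mixed = lam * x_a + (1.0 - lam) * x_b
    y_mixed = lam * y_a + (1.0 - lam) * y_b  # labels assumed one-hot
    return x_mixed, y_mixed
```

Mixing across domains, rather than within one, encourages the model to behave smoothly along directions that interpolate between source distributions.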
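For the representation-learning category, a simple way to encourage domain-invariant features is to penalize the distance between feature statistics of different domains, as in CORAL-style second-order alignment. The helper name `coral_penalty` below is a hypothetical sketch, not the survey's reference implementation.

```python
import numpy as np

def coral_penalty(feat_src, feat_tgt):
    """CORAL-style invariance penalty (illustrative sketch).

    Computes the squared Frobenius distance between the feature
    covariance matrices of two domains; minimizing it pushes the
    domains' second-order feature statistics to match.
    """
    d = feat_src.shape[1]
    cov_src = np.cov(feat_src, rowvar=False)
    cov_tgt = np.cov(feat_tgt, rowvar=False)
    return float(np.sum((cov_src - cov_tgt) ** 2) / (4.0 * d * d))
```

In practice such a penalty is added to the task loss, so the encoder is trained to be predictive while keeping the per-domain feature distributions aligned.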