Fuzhen Zhuang, Zhiyuan Qi, Keyu Duan, Dongbo Xi, Yongchun Zhu, Hengshu Zhu, Senior Member, IEEE, Hui Xiong, Fellow, IEEE, and Qing He
This chapter provides a comprehensive survey of transfer learning, a machine learning methodology that aims to improve the performance of target learners by transferring knowledge from related source domains. The survey covers more than forty representative transfer learning approaches, with a particular focus on homogeneous transfer learning, and interprets them from both data and model perspectives. It also introduces applications of transfer learning and reports experiments evaluating the performance of different models on three datasets: Amazon Reviews, Reuters-21578, and Office-31. The survey aims to help readers understand the current research status and ideas in transfer learning, and highlights the importance of selecting appropriate models for different applications. The chapter discusses related research areas, including semi-supervised learning, multi-view learning, and multi-task learning, and provides a detailed overview of the notation, definitions, and categorizations of transfer learning. It then delves into data-based and model-based interpretations, explaining strategies such as instance weighting and feature transformation, and discusses metrics and techniques for measuring distribution differences and preserving data properties. The chapter concludes with a discussion of the choice of kernel functions and the impact of kernel learning on classifier performance.
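As a concrete illustration of the distribution-difference metrics mentioned above, the sketch below computes a biased empirical estimate of the maximum mean discrepancy (MMD) between source and target samples using an RBF kernel, which is one common way such differences are quantified in instance-weighting and feature-transformation methods. This is a minimal NumPy sketch for illustration only; the function names, the bandwidth parameter `gamma`, and the toy data are assumptions and are not taken from the survey itself.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """RBF (Gaussian) kernel matrix between the rows of X and the rows of Y."""
    # Pairwise squared Euclidean distances.
    sq_dists = (
        np.sum(X ** 2, axis=1)[:, None]
        + np.sum(Y ** 2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-gamma * sq_dists)

def mmd_squared(Xs, Xt, gamma=1.0):
    """Biased empirical estimate of the squared MMD between source samples Xs
    and target samples Xt, using an RBF kernel."""
    k_ss = rbf_kernel(Xs, Xs, gamma)  # source-source kernel values
    k_tt = rbf_kernel(Xt, Xt, gamma)  # target-target kernel values
    k_st = rbf_kernel(Xs, Xt, gamma)  # source-target kernel values
    return k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy source and target samples drawn from slightly shifted distributions.
    Xs = rng.normal(loc=0.0, scale=1.0, size=(200, 10))
    Xt = rng.normal(loc=0.5, scale=1.0, size=(200, 10))
    print("squared MMD:", mmd_squared(Xs, Xt, gamma=0.5))
```

A larger value indicates a greater estimated discrepancy between the source and target distributions; transfer learning methods that rely on such a criterion typically reweight instances or learn a feature transformation so that this quantity is reduced.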