A survey of transfer learning

2016 | Karl Weiss*, Taghi M. Khoshgoftaar and DingDing Wang
This survey provides an overview of transfer learning, a methodology that improves learning performance by transferring knowledge from a related source domain to a target domain. Traditional machine learning assumes that training and testing data are drawn from the same domain, but in many real-world scenarios this assumption does not hold. Transfer learning is particularly useful when target-domain data is scarce, expensive to collect, or inaccessible. The paper defines transfer learning as the process of improving a target predictive function by using information from a related source domain, and it distinguishes homogeneous transfer learning, where the source and target share the same input feature space, from heterogeneous transfer learning, where the feature spaces differ. It also addresses negative transfer, in which knowledge carried over from the source domain degrades the performance of the target learner.
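For orientation, the definition the survey builds on can be stated compactly. The notation below follows the usual convention in the transfer learning literature (a domain as a feature space plus a marginal distribution, a task as a label space plus a predictive function); it is an illustrative restatement of that convention, not a quotation from the paper:

```latex
% Illustrative notation: a domain is a feature space plus a marginal
% distribution; a task is a label space plus a predictive function.
\[
  \mathcal{D} = \{\mathcal{X},\, P(X)\}, \qquad
  \mathcal{T} = \{\mathcal{Y},\, f(\cdot)\}
\]
% Given a source pair (D_S, T_S) and a target pair (D_T, T_T), transfer
% learning improves the target predictive function using source knowledge,
% under the condition that the domains or tasks differ:
\[
  \text{improve } f_T(\cdot) \text{ using } \mathcal{D}_S \text{ and } \mathcal{T}_S,
  \quad \text{where } \mathcal{D}_S \neq \mathcal{D}_T \ \text{or}\ \mathcal{T}_S \neq \mathcal{T}_T .
\]
% Homogeneous transfer: the input feature spaces match (X_S = X_T);
% heterogeneous transfer: they differ (X_S != X_T).
```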
The paper reviews the main families of transfer learning approaches: instance-based, feature-based (asymmetric and symmetric), parameter-based, and relational-based methods. It highlights the importance of aligning input feature spaces and correcting distribution differences between domains, and it surveys applications such as text sentiment classification, image classification, human activity classification, software defect classification, and multi-language text classification.

Among the specific techniques covered are conditional probability-based multi-source domain adaptation (CP-MDA), the two-stage weighting framework for multi-source domain adaptation (2SW-MDA), feature augmentation, multiple kernel learning, joint domain adaptation (JDA), adaptation regularization based transfer learning (ARTL), spectral feature alignment (SFA), and the discriminative clustering process (DCP). All of these aim to improve the target learner by leveraging knowledge from one or more source domains; a minimal sketch of the feature-augmentation idea is given at the end of this summary.

Finally, the paper discusses the challenges and limitations of transfer learning, including the need for labeled data, the impact of differences between domain distributions, and the risk of negative transfer. It concludes with future research directions, emphasizing the development of more efficient and effective methods for domain adaptation and knowledge transfer, and in doing so offers a comprehensive picture of current transfer learning solutions and their applications.
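As one concrete illustration, feature augmentation in its simplest and most widely cited form replicates each feature vector into a shared copy plus a domain-specific copy, so that one classifier trained on both domains can learn which features transfer and which are domain-specific. The sketch below is a toy example under assumed details (synthetic data, scikit-learn logistic regression, hypothetical helper names), not the exact formulation of any method evaluated in the survey:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def augment_source(X):
    # Source mapping x -> [x, x, 0]: a shared copy plus a source-specific copy.
    return np.hstack([X, X, np.zeros_like(X)])

def augment_target(X):
    # Target mapping x -> [x, 0, x]: a shared copy plus a target-specific copy.
    return np.hstack([X, np.zeros_like(X), X])

# Toy data: plentiful labeled source examples, only a few labeled target
# examples, with a small shift between the two domains.
rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(200, 5))
ys = (Xs[:, 0] > 0.0).astype(int)
Xt = rng.normal(0.5, 1.0, size=(10, 5))
yt = (Xt[:, 0] > 0.5).astype(int)

# Train a single classifier on the union of augmented source and target data.
X_train = np.vstack([augment_source(Xs), augment_target(Xt)])
y_train = np.concatenate([ys, yt])
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score new target instances through the target mapping.
Xt_new = rng.normal(0.5, 1.0, size=(50, 5))
predictions = clf.predict(augment_target(Xt_new))
```

Because the shared copy appears in both mappings, weights learned on it transfer across domains, while the zero-padded blocks let the model absorb source-only or target-only effects.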