6 Aug 2018 | Chuanqi Tan, Fuchun Sun, Tao Kong, Wenchang Zhang, Chao Yang, and Chunfang Liu
This paper presents a survey on deep transfer learning, a technique that addresses the challenge of insufficient training data by transferring knowledge from a source domain to a target domain. Deep transfer learning leverages deep neural networks to improve performance in scenarios where data is scarce or expensive to collect. The paper defines deep transfer learning and categorizes it into four types: instance-based, mapping-based, network-based, and adversarial-based.
Instance-based deep transfer learning uses weighted instances from the source domain to enhance the target domain's training data. Mapping-based methods aim to align feature spaces of different domains. Network-based approaches reuse pre-trained neural network components. Adversarial-based methods use adversarial training to learn domain-invariant features.
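Mapping-based methods typically minimize a distribution distance between the mapped source and target features; maximum mean discrepancy (MMD) is the criterion used by TCA and by many deep adaptation networks. A minimal NumPy sketch of the squared MMD with an RBF kernel (the function name and the `gamma` bandwidth are illustrative choices, not from the paper):

```python
import numpy as np

def mmd_rbf(X_s, X_t, gamma=1.0):
    """Squared maximum mean discrepancy between source samples X_s
    and target samples X_t, using an RBF kernel with bandwidth gamma."""
    def kernel(A, B):
        # pairwise squared Euclidean distances between rows of A and B
        d = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * d)
    # MMD^2 = E[k(s, s')] + E[k(t, t')] - 2 E[k(s, t)]
    return kernel(X_s, X_s).mean() + kernel(X_t, X_t).mean() - 2 * kernel(X_s, X_t).mean()
```

A mapping that drives this quantity toward zero makes the two feature distributions hard to distinguish, which is the goal these methods share.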
The paper reviews recent research in each category, highlighting techniques such as TrAdaBoost, TCA (transfer component analysis), and adversarial networks. It emphasizes the importance of domain adaptation and multi-source transfer. The survey also discusses challenges such as negative transfer and the need for stronger theoretical support for how knowledge transfers. The paper concludes that deep transfer learning is a promising area with potential applications across many domains, especially as deep neural networks continue to evolve.
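The instance-weighting idea behind TrAdaBoost can be illustrated by a single boosting-round update: source instances the current learner misclassifies are down-weighted (they appear unhelpful for the target task), while misclassified target instances are up-weighted, AdaBoost-style. A simplified NumPy sketch (the function name and argument layout are mine; the full algorithm of Dai et al. iterates this over many rounds):

```python
import numpy as np

def tradaboost_reweight(w_src, w_tgt, err_src, err_tgt, eps_t, n_rounds):
    """One TrAdaBoost-style weight update.

    w_src, w_tgt : current instance weights for source/target data
    err_src, err_tgt : boolean arrays, True where the learner erred
    eps_t : weighted error rate on the target data this round
    n_rounds : total number of boosting rounds
    """
    n = len(w_src)
    # fixed down-weighting rate for source instances
    beta = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n) / n_rounds))
    # AdaBoost rate derived from the target error
    beta_t = eps_t / (1.0 - eps_t)
    w_src = w_src * beta ** err_src.astype(float)        # shrink bad source weights
    w_tgt = w_tgt * beta_t ** (-err_tgt.astype(float))   # grow bad target weights
    total = w_src.sum() + w_tgt.sum()
    return w_src / total, w_tgt / total                  # renormalize jointly
```

Over successive rounds this concentrates weight on target instances and on the source instances that remain consistent with the target task.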