A Survey on Multi-Task Learning


29 Mar 2021 | Yu Zhang and Qiang Yang
This paper provides a comprehensive survey of Multi-Task Learning (MTL), covering algorithmic modeling, applications, and theoretical analyses. MTL is a learning paradigm that leverages information from multiple related tasks to improve the generalization performance of all tasks. The survey discusses the key aspects of MTL, namely when, what, and how to share knowledge among tasks, and highlights the differences between MTL and related paradigms such as multi-label learning, multi-output regression, and multi-view learning.

The paper classifies MTL algorithms into five categories: feature learning, low-rank, task clustering, task relation learning, and decomposition approaches. It discusses how MTL can be combined with other learning paradigms such as semi-supervised learning, active learning, and reinforcement learning, and reviews online, parallel, and distributed MTL models, as well as dimensionality reduction and feature hashing techniques for handling large numbers of tasks and high-dimensional data.

Real-world applications of MTL are reviewed in areas such as computer vision, bioinformatics, and NLP. Theoretical analyses of MTL are also presented, and the paper concludes with a discussion of future directions for MTL research.
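To make the paradigm concrete, the sketch below illustrates the feature learning category in miniature: several related regression tasks are trained jointly, and an l2,1 penalty on the rows of the weight matrix encourages all tasks to select a shared subset of features. This is a minimal NumPy sketch on synthetic toy data, not an implementation from the survey; the data, dimensions, and hyperparameters are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical): 3 related regression tasks over 10 features.
n_tasks, n_samples, n_features = 3, 50, 10
X = [rng.standard_normal((n_samples, n_features)) for _ in range(n_tasks)]
w_shared = rng.standard_normal(n_features)  # tasks share a common signal
Y = [x @ (w_shared + 0.1 * rng.standard_normal(n_features)) for x in X]

W = np.zeros((n_features, n_tasks))  # column t holds the weights of task t
lam, lr = 0.1, 0.01  # penalty strength and step size (assumed values)

def objective(W):
    # Sum of per-task squared losses plus an l2,1 penalty: each row of W
    # (one feature across all tasks) is a group, so shrinking whole rows
    # pushes the tasks toward a common set of relevant features.
    loss = sum(np.mean((X[t] @ W[:, t] - Y[t]) ** 2) for t in range(n_tasks))
    return loss + lam * np.sum(np.linalg.norm(W, axis=1))

losses = []
for _ in range(200):
    grad = np.zeros_like(W)
    for t in range(n_tasks):
        # Gradient of task t's mean squared error w.r.t. its own column.
        grad[:, t] = 2 * X[t].T @ (X[t] @ W[:, t] - Y[t]) / n_samples
    # Subgradient of the l2,1 term for rows with nonzero norm.
    row_norms = np.linalg.norm(W, axis=1, keepdims=True)
    grad += lam * np.where(row_norms > 1e-12,
                           W / np.maximum(row_norms, 1e-12), 0.0)
    W -= lr * grad
    losses.append(objective(W))
```

Because the l2,1 norm couples the tasks' otherwise independent losses, information flows between tasks through the shared row-sparsity pattern; this joint coupling through a shared structure is the common thread across the algorithm categories the survey describes.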