Multi-task learning (MTL) is a promising area in machine learning that aims to improve the performance of multiple related learning tasks by leveraging useful information shared among them. This paper provides an overview of MTL, discussing various settings such as multi-task supervised learning, multi-task unsupervised learning, multi-task semi-supervised learning, multi-task active learning, multi-task reinforcement learning, multi-task online learning, and multi-task multi-view learning. For each setting, representative MTL models are presented. To speed up the learning process, parallel and distributed MTL models are introduced. MTL has been applied to various areas, including computer vision, bioinformatics, health informatics, speech, natural language processing, web applications, and ubiquitous computing. Recent theoretical analyses of MTL are also reviewed.
MTL is related to other areas in machine learning, including transfer learning, multi-label learning, and multi-output regression, but exhibits different characteristics. For example, like MTL, transfer learning aims to transfer knowledge across tasks, but the two differ in goal: transfer learning uses one or more source tasks to help a single target task, whereas MTL uses multiple tasks to help one another. When all tasks in multi-task supervised learning share the same training data, the problem reduces to multi-label learning or multi-output regression; in this sense, MTL can be viewed as a generalization of both.
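The idea of multiple tasks helping one another can be made concrete with hard parameter sharing, one common MTL construction: several tasks are trained jointly through a shared representation, with a separate output head per task. The sketch below is purely illustrative (synthetic data, made-up dimensions), not a model from the survey; it trains two related linear regression tasks whose gradients both update the shared weight matrix.

```python
import numpy as np

# Illustrative hard-parameter-sharing MTL sketch (all names and data
# here are hypothetical): two regression tasks share a linear
# representation W; each task has its own head vector.

rng = np.random.default_rng(0)
n, d, h = 100, 5, 3  # samples, input dim, shared-representation dim

# Synthetic data: both tasks depend on the same latent features,
# so they are "related" and can help each other.
X = rng.normal(size=(n, d))
latent = X @ rng.normal(size=(d, h))
y = [latent @ rng.normal(size=h),   # task 1 targets
     latent @ rng.normal(size=h)]   # task 2 targets

W = rng.normal(size=(d, h)) * 0.1                     # shared, learned jointly
heads = [rng.normal(size=h) * 0.1 for _ in range(2)]  # task-specific

def total_loss():
    """Sum of per-task mean squared errors."""
    return sum(np.mean((X @ W @ v - t) ** 2) for v, t in zip(heads, y))

init_loss = total_loss()
lr = 0.01
for _ in range(200):
    grad_W = np.zeros_like(W)
    for t in range(2):
        err = X @ W @ heads[t] - y[t]                  # (n,) residuals
        grad_W += X.T @ np.outer(err, heads[t]) / n    # both tasks push on W
        heads[t] -= lr * (W.T @ X.T @ err) / n         # each head updates alone
    W -= lr * grad_W                                   # shared update

final_loss = total_loss()
```

Because `W` accumulates gradients from both tasks, information flows between them through the shared parameters, which is the mechanism the abstract alludes to when it says MTL "uses multiple tasks to help one another".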
The paper first introduces the definition of MTL and then discusses each of the settings above in turn, presenting representative MTL models for each. When the number of tasks is large or the data for different tasks reside on different machines, parallel and distributed MTL models become necessary, and several such models are introduced. As a promising learning paradigm, MTL has been applied in several areas, including computer vision, bioinformatics, health informatics, speech, natural language processing, web applications, and ubiquitous computing, with representative applications presented for each area. Finally, theoretical analyses that can deepen our understanding of MTL are reviewed.