A Survey on Multi-view Learning


20 Apr 2013 | Chang Xu, Dacheng Tao, Chao Xu
This survey discusses methods for learning from data represented by multiple views. Multi-view learning aims to improve performance by exploiting both the consistency and the complementary information across views. The paper classifies multi-view learning algorithms into three categories: co-training, multiple kernel learning, and subspace learning. Co-training algorithms alternately train models on distinct views so as to maximize their agreement; multiple kernel learning combines kernels derived from different views to enhance learning; and subspace learning seeks a shared latent subspace that captures the structure common to all views.

The paper highlights two principles underlying multi-view learning: the consensus principle (maximize the agreement between views) and the complementary principle (exploit the information unique to each view). It also discusses the challenges of view generation, including constructing multiple views from a single data source and evaluating their effectiveness. Techniques such as random subspace methods, feature decomposition, and kernel-based approaches are presented for view construction, while view evaluation assesses the compatibility, sufficiency, and noise robustness of the resulting views.

The paper then examines how multiple views are combined, including feature concatenation, co-training, and kernel combination. Both linear and nonlinear kernel combination methods are discussed, with an emphasis on optimizing the kernel weights to improve learning performance. The survey concludes that multi-view learning offers significant advantages over single-view learning, including better generalization and improved performance through the integration of diverse views.
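To make the co-training idea concrete, the following is a minimal sketch (not code from the paper): one classifier per view, where in each round each view pseudo-labels the unlabeled examples it is most confident about and adds them to a shared labeled pool. The nearest-centroid classifier, the confidence score, and all function names here are illustrative assumptions, not the survey's notation.

```python
import numpy as np

class NearestCentroid:
    """Toy classifier: predicts the class of the nearest class centroid."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def _dists(self, X):
        # Squared distance of each sample to each class centroid.
        return ((X[:, None, :] - self.centroids_[None, :, :]) ** 2).sum(axis=2)

    def predict(self, X):
        return self.classes_[self._dists(X).argmin(axis=1)]

    def confidence(self, X):
        # Crude confidence: negative distance to the nearest centroid.
        return -self._dists(X).min(axis=1)

def co_train(X1, X2, y, labeled, unlabeled, rounds=5, k=2):
    """Co-training sketch over two views X1, X2.

    Only y[labeled] is treated as known; entries of y at unlabeled indices
    are overwritten with pseudo-labels as training proceeds."""
    labeled, unlabeled = list(labeled), list(unlabeled)
    y = y.copy()
    for _ in range(rounds):
        for X in (X1, X2):
            if not unlabeled:
                break
            clf = NearestCentroid().fit(X[labeled], y[labeled])
            conf = clf.confidence(X[unlabeled])
            pick = np.argsort(conf)[-k:]              # k most confident
            chosen = [unlabeled[i] for i in pick]
            y[chosen] = clf.predict(X[chosen])        # pseudo-label them
            labeled += chosen
            unlabeled = [i for i in unlabeled if i not in chosen]
    # Final per-view classifiers trained on the enlarged labeled pool.
    return (NearestCentroid().fit(X1[labeled], y[labeled]),
            NearestCentroid().fit(X2[labeled], y[labeled]))
```

Because each view labels points for the pool that the other view also trains on, the two classifiers are pushed toward agreement on the unlabeled data, which is the consensus principle at work.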
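The linear kernel combination mentioned above can be sketched in a few lines (an illustrative example, not the survey's formulation): compute one Gram matrix per view and form the convex combination K = Σ_m η_m K_m with η_m ≥ 0 and Σ_m η_m = 1. A convex combination of positive semidefinite Gram matrices is itself positive semidefinite, so the result is a valid kernel; in full multiple kernel learning the weights η_m would be optimized jointly with the predictor rather than fixed by hand as here.

```python
import numpy as np

def rbf_kernel(X, gamma):
    """Gram matrix of the Gaussian (RBF) kernel on one view."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

def combine_kernels(kernels, weights):
    """Convex combination sum_m eta_m * K_m of per-view Gram matrices.

    Weights are clipped to be nonnegative and normalized onto the simplex,
    which preserves positive semidefiniteness of the result."""
    w = np.asarray(weights, dtype=float)
    assert (w >= 0).all() and w.sum() > 0
    w = w / w.sum()
    return sum(wi * K for wi, K in zip(w, kernels))

# Two views of the same 10 samples, with different dimensionalities.
rng = np.random.default_rng(1)
X_view1 = rng.normal(size=(10, 3))
X_view2 = rng.normal(size=(10, 4))
K = combine_kernels([rbf_kernel(X_view1, 0.5), rbf_kernel(X_view2, 0.1)],
                    weights=[0.7, 0.3])
```

The combined matrix `K` can be plugged into any kernel method (an SVM, kernel ridge regression) in place of a single-view kernel.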