Multi-Manifold Semi-Supervised Learning


2009 | Andrew B. Goldberg, Xiaojin Zhu, Aarti Singh, Zhiting Xu, Robert Nowak
This paper addresses the semi-supervised learning (SSL) problem when the data lie on multiple intersecting manifolds. The authors provide a finite-sample analysis to quantify the potential gain from using unlabeled data in this multi-manifold setting, and they propose a novel SSL algorithm that separates the different manifolds into decision sets and performs supervised learning within each set. The algorithm leverages Hellinger distance and size-constrained spectral clustering to handle overlapping and intersecting manifolds.

The theoretical analysis is based on the cluster assumption, which states that the target function is locally smooth over subsets of the feature space delineated by changes in the marginal density. The paper also discusses the conditions under which SSL can outperform supervised learning (SL) and provides conjectured finite-sample performance bounds for both SSL and SL.

Experiments on synthetic and real datasets, including handwritten digit recognition and motion segmentation tasks, demonstrate that the proposed approach improves on traditional SSL methods for multi-manifold data and shows consistent improvements over SL when sufficient unlabeled data are available.
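The following is a minimal, hypothetical sketch of the decision-set pipeline described above, not the authors' reference implementation. It assumes local Gaussian density estimates around each point, substitutes scikit-learn's unconstrained SpectralClustering for the paper's size-constrained variant, and uses a plain k-NN classifier as the per-set supervised learner; all function names are illustrative.

```python
# Hypothetical sketch: fit local Gaussians, compare them with the closed-form
# Hellinger distance, cluster points into decision sets, then run a supervised
# learner inside each set. Not the authors' implementation.
import numpy as np
from sklearn.neighbors import NearestNeighbors, KNeighborsClassifier
from sklearn.cluster import SpectralClustering


def local_gaussians(X, k=10):
    """Fit a Gaussian (mean, covariance) to the k-NN neighborhood of each point."""
    nn = NearestNeighbors(n_neighbors=k).fit(X)
    _, idx = nn.kneighbors(X)
    means = np.array([X[i].mean(axis=0) for i in idx])
    covs = np.array([np.cov(X[i].T) + 1e-6 * np.eye(X.shape[1]) for i in idx])
    return means, covs


def hellinger(mu1, cov1, mu2, cov2):
    """Closed-form Hellinger distance between two multivariate Gaussians:
    H^2 = 1 - det(S1)^(1/4) det(S2)^(1/4) / det(Savg)^(1/2)
              * exp(-(mu1-mu2)^T Savg^{-1} (mu1-mu2) / 8),  Savg = (S1+S2)/2."""
    avg = (cov1 + cov2) / 2.0
    num = np.linalg.det(cov1) ** 0.25 * np.linalg.det(cov2) ** 0.25
    den = np.sqrt(np.linalg.det(avg))
    diff = mu1 - mu2
    h2 = 1.0 - (num / den) * np.exp(-0.125 * diff @ np.linalg.solve(avg, diff))
    return np.sqrt(max(h2, 0.0))


def decision_set_ssl(X_lab, y_lab, X_unlab, n_sets=3, k=10):
    """Cluster labeled + unlabeled points into decision sets, then fit a
    supervised model per set and predict labels for the unlabeled points."""
    X = np.vstack([X_lab, X_unlab])
    means, covs = local_gaussians(X, k=k)
    n = len(X)
    # Dense pairwise Hellinger distances for clarity; the paper only compares
    # nearby points.
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = hellinger(means[i], covs[i], means[j], covs[j])
    sigma = np.median(D[D > 0])
    affinity = np.exp(-(D ** 2) / (2 * sigma ** 2))
    # Unconstrained spectral clustering stands in for the paper's
    # size-constrained variant.
    sets = SpectralClustering(n_clusters=n_sets, affinity="precomputed",
                              random_state=0).fit_predict(affinity)
    models, lab_sets = {}, sets[: len(X_lab)]
    for s in np.unique(sets):
        mask = lab_sets == s
        if mask.sum() > 0:  # train only where the set contains labeled points
            models[s] = KNeighborsClassifier(
                n_neighbors=int(min(3, mask.sum()))).fit(X_lab[mask], y_lab[mask])
    preds = np.array([models[s].predict(X[[i]])[0] if s in models else y_lab[0]
                      for i, s in enumerate(sets)])
    return preds[len(X_lab):]
```

In this sketch the decision sets are formed from all points (labeled and unlabeled), so the unlabeled data influence only the partition, while the final predictor in each set is purely supervised, mirroring the "decision sets, then supervised learning within each set" structure summarized above.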