January 2007 | Shuicheng Yan, Member, IEEE, Dong Xu, Benyu Zhang, Hong-Jiang Zhang, Fellow, IEEE, Qiang Yang, Senior Member, IEEE, and Stephen Lin
This paper presents a general framework for dimensionality reduction called graph embedding, which unifies a wide range of supervised and unsupervised algorithms. The framework regards each algorithm as a direct graph embedding or its linear/kernel/tensor extension, where an intrinsic graph describes the desired statistical or geometric properties of the data and a penalty graph characterizes properties to be avoided. The graph embedding framework is then used to develop new dimensionality reduction algorithms, such as Marginal Fisher Analysis (MFA), which addresses limitations of traditional Linear Discriminant Analysis (LDA) by characterizing intraclass compactness and interclass separability through intrinsic and penalty graphs. MFA outperforms LDA in face recognition experiments and is more general, as it makes no assumption on the data distribution. The framework also includes kernelization and tensorization extensions, which accommodate nonlinear mappings and high-order tensor data. The paper demonstrates that many existing dimensionality reduction algorithms, including PCA, LDA, LPP, ISOMAP, LLE, and Laplacian Eigenmap, can be reformulated within the graph embedding framework. The proposed MFA, along with its kernel and tensor extensions, achieves superior performance compared to traditional methods in face recognition and synthetic data experiments.
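In the graph embedding view, the low-dimensional representation is obtained from a graph-preserving criterion: neighboring vertices of the intrinsic graph should stay close, subject to a constraint that prevents trivial solutions. A sketch of the criterion for the direct (one-dimensional) embedding case:

$$
y^{*} = \arg\min_{y^{\top} B y = d} \sum_{i \neq j} \lVert y_i - y_j \rVert^{2} \, W_{ij}
      = \arg\min_{y^{\top} B y = d} y^{\top} L y ,
$$

where $W$ is the adjacency matrix of the intrinsic graph, $L = D - W$ is its Laplacian with $D_{ii} = \sum_{j \neq i} W_{ij}$, and the constraint matrix $B$ is either a diagonal normalization matrix or the Laplacian of the penalty graph. Different choices of $W$ and $B$ recover the individual algorithms listed above.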
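The MFA construction can be sketched numerically. In this illustrative sketch (the function name, the neighbor counts `k1`/`k2`, and the small regularization term are assumptions for the illustration, not details from the paper), the intrinsic graph connects each point to its `k1` nearest same-class neighbors, the penalty graph connects each point to its `k2` nearest other-class neighbors (the marginal pairs), and the linear projection comes from the resulting generalized eigenvalue problem:

```python
import numpy as np
from scipy.linalg import eigh

def mfa(X, y, k1=3, k2=5, d_out=1):
    """Sketch of Marginal Fisher Analysis.

    X: (n, D) data matrix, y: (n,) integer class labels.
    Returns a (D, d_out) linear projection matrix.
    """
    n = X.shape[0]
    # Pairwise squared Euclidean distances.
    dist = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    same = y[:, None] == y[None, :]

    W = np.zeros((n, n))   # intrinsic graph: intraclass compactness
    Wp = np.zeros((n, n))  # penalty graph: marginal interclass pairs
    for i in range(n):
        d_i = dist[i].copy()
        d_i[i] = np.inf
        # Connect to the k1 nearest neighbors within the same class.
        intra = np.where(same[i])[0]
        intra = intra[intra != i]
        for j in intra[np.argsort(d_i[intra])][:k1]:
            W[i, j] = W[j, i] = 1.0
        # Connect to the k2 nearest neighbors from other classes.
        inter = np.where(~same[i])[0]
        for j in inter[np.argsort(d_i[inter])][:k2]:
            Wp[i, j] = Wp[j, i] = 1.0

    L = np.diag(W.sum(1)) - W      # Laplacian of the intrinsic graph
    Lp = np.diag(Wp.sum(1)) - Wp   # Laplacian of the penalty graph
    A = X.T @ L @ X
    B = X.T @ Lp @ X + 1e-8 * np.eye(X.shape[1])  # regularized for stability
    # Minimize intraclass scatter relative to marginal interclass scatter:
    # generalized eigenproblem A v = lambda B v, keep the smallest eigenvalues.
    vals, vecs = eigh(A, B)
    return vecs[:, :d_out]
```

Because only neighborhood relations enter the two graphs, this construction needs no Gaussian assumption on each class, which is the source of MFA's extra generality over LDA noted above.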