Graph Embedding and Extensions: A General Framework for Dimensionality Reduction

January 2007 | Shuicheng Yan, Member, IEEE, Dong Xu, Benyu Zhang, Hong-Jiang Zhang, Fellow, IEEE, Qiang Yang, Senior Member, IEEE, and Stephen Lin
This paper presents a general framework called graph embedding to unify various dimensionality reduction algorithms, including supervised and unsupervised methods from statistics and geometry. Within the graph embedding framework, each algorithm is expressed as a direct graph embedding, or as a linear/kernel/tensor extension thereof, of a specific intrinsic graph that describes the desired statistical or geometric properties of a dataset, with constraints imposed either by scale normalization or by a penalty graph that characterizes properties to be avoided. The framework is then used to develop a new supervised dimensionality reduction algorithm, Marginal Fisher Analysis (MFA), which overcomes limitations of traditional Linear Discriminant Analysis (LDA) arising from its assumptions on data distribution and its restricted number of available projection directions. MFA characterizes intraclass compactness and interclass separability through intrinsic and penalty graphs, respectively. Experimental results on real-world and synthetic face recognition datasets demonstrate the superiority of MFA over LDA and its kernel/tensor extensions.
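To make the abstract's description more concrete, the following is a minimal sketch (not the authors' reference implementation) of how MFA's intrinsic and penalty graphs and the linearized graph-embedding objective could be assembled. The function name `mfa` and the neighborhood parameters `k1` and `k2` are illustrative assumptions; the intrinsic graph links each sample to its k1 nearest same-class neighbors (intraclass compactness), the penalty graph links it to its k2 nearest different-class neighbors (interclass separability), and the projection directions come from a generalized eigenvalue problem on the two graph Laplacians.

```python
import numpy as np
from scipy.linalg import eigh

def mfa(X, y, n_components=2, k1=5, k2=20):
    """Minimal Marginal Fisher Analysis sketch (illustrative, not the paper's code).

    X : (n_samples, n_features) data matrix
    y : (n_samples,) integer class labels
    k1: number of same-class neighbors used in the intrinsic graph
    k2: number of different-class neighbors used in the penalty graph
    """
    n = X.shape[0]
    # Pairwise squared Euclidean distances.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)

    W = np.zeros((n, n))   # intrinsic graph: intraclass compactness
    Wp = np.zeros((n, n))  # penalty graph: interclass separability
    for i in range(n):
        same = np.where((y == y[i]) & (np.arange(n) != i))[0]
        diff = np.where(y != y[i])[0]
        # Connect to the k1 nearest neighbors within the same class.
        for j in same[np.argsort(d2[i, same])][:k1]:
            W[i, j] = W[j, i] = 1.0
        # Connect to the k2 nearest neighbors from other classes (marginal pairs).
        for j in diff[np.argsort(d2[i, diff])][:k2]:
            Wp[i, j] = Wp[j, i] = 1.0

    L = np.diag(W.sum(axis=1)) - W     # Laplacian of the intrinsic graph
    Lp = np.diag(Wp.sum(axis=1)) - Wp  # Laplacian of the penalty graph

    # Linearized graph-embedding objective: minimize w^T X^T L X w
    # subject to a constraint on w^T X^T Lp X w (penalty graph).
    A = X.T @ L @ X
    B = X.T @ Lp @ X + 1e-6 * np.eye(X.shape[1])  # small regularizer for stability
    _, vecs = eigh(A, B)               # eigenvalues returned in ascending order
    return vecs[:, :n_components]      # directions with the smallest ratio
```

A usage sketch: `P = mfa(X_train, y_train, n_components=10)` followed by `X_low = X_train @ P` projects the data into the learned subspace; kernel and tensor variants described in the paper replace the linear projection with the corresponding extension.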