Efficient and Robust Feature Extraction by Maximum Margin Criterion

Haifeng Li, Tao Jiang, and Keshu Zhang
The paper introduces a new feature extraction method based on the maximum margin criterion (MMC), which aims to maximize the margin between classes after dimensionality reduction. This approach is proposed as an alternative to traditional methods like Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), which have limitations such as suboptimal feature selection and instability due to the small sample size problem. The authors derive LDA from MMC by incorporating constraints and establish new linear and nonlinear feature extractors that avoid the small sample size problem. The linear feature extractor is derived by solving an eigenvalue problem, while the nonlinear extractor is achieved through kernelization. Experimental results on various datasets, including the Iris dataset, vehicle dataset, ORL face dataset, and brain cancer gene expression data, demonstrate the effectiveness and robustness of the proposed methods. The methods show competitive performance with LDA and superior results compared to LDA+PCA and kernel PCA, especially in scenarios with limited training data. The paper concludes by highlighting the advantages of the proposed methods and suggesting future research directions, such as developing faster algorithms for the nonlinear feature extractor.
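To make the eigenvalue formulation concrete, here is a minimal NumPy sketch of a linear MMC-style extractor. It assumes the criterion tr(W^T(S_b − S_w)W) with prior-weighted scatter matrices, maximized by the top eigenvectors of S_b − S_w, which requires no inversion of S_w and therefore sidesteps the small sample size problem. Function and parameter names (mmc_fit, n_components) are illustrative, not taken from the authors' implementation.

```python
import numpy as np

def mmc_fit(X, y, n_components):
    """Return a projection matrix W maximizing tr(W^T (S_b - S_w) W).

    X: (n_samples, n_features) data matrix; y: class labels.
    """
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    n_features = X.shape[1]
    S_b = np.zeros((n_features, n_features))  # between-class scatter
    S_w = np.zeros((n_features, n_features))  # within-class scatter
    for c in classes:
        Xc = X[y == c]
        prior = Xc.shape[0] / X.shape[0]
        mean_c = Xc.mean(axis=0)
        diff = (mean_c - overall_mean).reshape(-1, 1)
        S_b += prior * diff @ diff.T
        S_w += prior * np.cov(Xc, rowvar=False, bias=True)
    # Eigendecomposition of the symmetric matrix S_b - S_w; no inverse of S_w needed.
    eigvals, eigvecs = np.linalg.eigh(S_b - S_w)
    order = np.argsort(eigvals)[::-1]          # largest eigenvalues first
    return eigvecs[:, order[:n_components]]    # columns of W span the reduced space

# Usage (hypothetical data): W = mmc_fit(X_train, y_train, n_components=2)
# then project with X_reduced = X_train @ W
```

The kernel (nonlinear) variant described in the paper follows the same idea after mapping the data into a feature space via a kernel function; it is omitted here for brevity.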