M-FANet: Multi-Feature Attention Convolutional Neural Network for Motor Imagery Decoding

2024 | Yiyang Qin, Banghua Yang, Sixiong Ke, Peng Liu, Fenqi Rong, Xinxing Xia
The paper introduces M-FANet, a lightweight Multi-Feature Attention Convolutional Neural Network designed for motor imagery (MI) decoding. M-FANet incorporates multiple attention modules to extract and select spectral, spatial, and temporal features from EEG signals. These modules help eliminate redundant information, enhance local spatial feature extraction, and calibrate feature maps. The paper also employs R-Drop, a training method that addresses the inconsistency between training and inference caused by dropout, improving the model's generalization capability. Extensive experiments on the BCIC-IV-2a and WBCIC-MI datasets show that M-FANet achieves superior performance compared to state-of-the-art methods, with 79.28% 4-class classification accuracy on the BCIC-IV-2a dataset and 77.86% 3-class classification accuracy on the WBCIC-MI dataset. The effectiveness of the multi-feature attention modules and R-Drop is validated through ablation studies and visualizations. The proposed method demonstrates promising potential for MI-based BCI research and applications.
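The R-Drop idea mentioned above can be illustrated with a minimal NumPy sketch: the same input is passed through the network twice with independent dropout masks, and a symmetric KL-divergence term is added to the cross-entropy loss to keep the two predictions consistent. The function name `r_drop_loss` and the weighting parameter `alpha` are illustrative, not taken from the paper's code.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q):
    # KL(p || q) per sample over the class axis.
    return np.sum(p * (np.log(p) - np.log(q)), axis=-1)

def r_drop_loss(logits1, logits2, labels, alpha=1.0):
    """Illustrative R-Drop objective: average cross-entropy of two
    dropout forward passes plus a symmetric KL term (weighted by
    `alpha`, a hypothetical hyperparameter) that pulls the two
    predictive distributions together."""
    p, q = softmax(logits1), softmax(logits2)
    idx = np.arange(len(labels))
    ce = -0.5 * (np.log(p[idx, labels]) + np.log(q[idx, labels]))
    kl_sym = 0.5 * (kl(p, q) + kl(q, p))
    return float(np.mean(ce + alpha * kl_sym))
```

When the two passes agree exactly, the KL term vanishes and the loss reduces to plain cross-entropy; any disagreement between the dropout sub-models adds a penalty, which is what regularizes the gap between training (dropout on) and inference (dropout off).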