This paper proposes a lightweight Multi-Feature Attention Neural Network (M-FANet) for motor imagery (MI) decoding from electroencephalogram (EEG) signals. M-FANet integrates multiple attention modules that extract and calibrate frequency-domain, local spatial, and feature-map information, improving MI classification performance. The model is trained with Regularized Dropout (R-Drop), which mitigates the training-inference inconsistency caused by dropout and improves generalization. M-FANet achieves 79.28% 4-class classification accuracy (kappa: 0.7259) on the BCIC-IV-2a dataset and 77.86% 3-class classification accuracy (kappa: 0.6650) on the WBCIC-MI dataset, outperforming state-of-the-art MI decoding methods. Ablation studies and visualizations validate the contributions of the multi-feature attention modules and R-Drop, and the study further examines how the regularization parameter α affects performance. Because M-FANet attains this accuracy with a lightweight architecture and low computational and memory requirements, it is a promising solution for real-world MI-based brain-computer interfaces (BCIs).
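As context for the R-Drop objective referenced above, the following is a minimal PyTorch sketch of the general R-Drop training loss (cross-entropy on two stochastic forward passes plus an α-weighted symmetric KL term); it is not the authors' implementation, and the model, batch shapes, and value of α are placeholders.

```python
import torch
import torch.nn.functional as F


def r_drop_loss(model: torch.nn.Module,
                x: torch.Tensor,
                y: torch.Tensor,
                alpha: float) -> torch.Tensor:
    """Sketch of the R-Drop training loss for a classifier with dropout.

    The same mini-batch is passed through the model twice; because dropout
    samples a different sub-network each time, the two passes yield
    different logits. The loss combines the usual cross-entropy on both
    passes with a symmetric KL-divergence term (weighted by alpha) that
    pushes the two predictive distributions to agree, reducing the
    train/inference gap introduced by dropout.
    """
    logits1 = model(x)  # first stochastic forward pass (dropout active)
    logits2 = model(x)  # second stochastic forward pass

    # Classification loss, averaged over the two passes.
    ce = 0.5 * (F.cross_entropy(logits1, y) + F.cross_entropy(logits2, y))

    # Symmetric (bidirectional) KL divergence between the two predictions.
    logp1 = F.log_softmax(logits1, dim=-1)
    logp2 = F.log_softmax(logits2, dim=-1)
    kl = 0.5 * (
        F.kl_div(logp1, logp2, reduction="batchmean", log_target=True)
        + F.kl_div(logp2, logp1, reduction="batchmean", log_target=True)
    )
    return ce + alpha * kl
```

In this formulation α trades off the consistency regularizer against the classification loss, which is the parameter whose effect on performance the paper analyzes.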