EF-Net: Mental State Recognition by Analyzing Multimodal EEG-fNIRS via CNN

15 March 2024 | Aniqa Arif, Yihe Wang, Rui Yin, Xiang Zhang, Ahmed Helmy
The paper introduces EF-Net, a convolutional neural network (CNN)-based multimodal deep learning model designed to analyze brain signals for mental state recognition. The model leverages both electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) data to capture temporal and spatial features, respectively. The study evaluates EF-Net on an EEG-fNIRS word generation (WG) dataset, focusing on subject-independent analysis to assess its ability to generalize to unseen subjects. The model is compared with five baseline approaches, spanning both traditional machine learning and deep learning methods.

EF-Net demonstrates superior performance in both accuracy and F1 score, achieving F1 scores of 99.36%, 98.31%, and 65.05% in the subject-dependent, subject-semi-dependent, and subject-independent settings, respectively. These results highlight EF-Net's ability to learn and interpret mental states and brain activity across different and unseen subjects. The paper also discusses limitations and future work, including the need to improve subject-independent results and to explore new datasets and more advanced model architectures.
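To make the evaluation settings concrete, the sketch below shows what a subject-independent split means in practice: every trial from the held-out subjects is excluded from training, so the model is tested only on people it has never seen. The subject IDs and trial counts here are illustrative, not taken from the paper's WG dataset.

```python
# Minimal sketch of a subject-independent train/test split.
# Subject IDs and trial counts are hypothetical, for illustration only.

def subject_independent_split(samples, held_out_subjects):
    """Hold out all trials from `held_out_subjects` so that
    evaluation uses only subjects unseen during training."""
    train = [s for s in samples if s["subject"] not in held_out_subjects]
    test = [s for s in samples if s["subject"] in held_out_subjects]
    return train, test

# Toy dataset: 5 subjects x 4 trials each (signal features omitted).
samples = [
    {"subject": f"S{i}", "trial": t}
    for i in range(1, 6)
    for t in range(4)
]

train, test = subject_independent_split(samples, held_out_subjects={"S5"})

# No subject appears in both splits.
train_subjects = {s["subject"] for s in train}
test_subjects = {s["subject"] for s in test}
assert train_subjects.isdisjoint(test_subjects)
```

By contrast, a subject-dependent split would mix trials from the same subject into both sets, which is typically the easier setting and helps explain the gap between the 99.36% and 65.05% F1 scores reported above.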