11 Jan 2024 | Yuqi Chen, Kan Ren, Kaitao Song, Yansen Wang, Yifan Wang, Dongsheng Li, Lili Qiu
The paper introduces EEGFORMER, a pretraining model for electroencephalography (EEG) data designed to leverage large-scale unlabeled data and improve interpretability. EEGFORMER uses a vector-quantized Transformer to learn universal representations of EEG signals that can be adapted to various downstream tasks. The model is pretrained on the Temple University EEG Corpus (TUH Corpus), which contains over 1.7 TB of unlabeled EEG data. The authors evaluate EEGFORMER on multiple downstream tasks, including seizure detection, abnormal event detection, and emotion recognition, demonstrating its effectiveness and transferability. They also highlight the model's interpretability by analyzing the learned codebook and identifying reusable patterns in the data. Experimental results show that EEGFORMER outperforms existing methods in both accuracy and interpretability, making it a promising tool for EEG data analysis and interpretation.
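To make the vector-quantized Transformer idea concrete, here is a minimal PyTorch sketch of the general technique: a Transformer encoder maps EEG patches to continuous vectors, and a VQ bottleneck snaps each vector to its nearest codebook entry, yielding the discrete tokens that support the codebook-based interpretability analysis. This is an illustration of standard VQ-VAE-style quantization, not the paper's actual architecture; the patch length, model width, codebook size, and all class names (`VectorQuantizer`, `EEGEncoder`) are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    """Minimal VQ bottleneck: map each encoder output to its nearest
    codebook entry, with a straight-through estimator for gradients."""
    def __init__(self, num_codes=1024, dim=128, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta  # commitment loss weight (standard VQ-VAE default)

    def forward(self, z):
        # z: (batch, seq_len, dim) continuous encoder outputs
        flat = z.reshape(-1, z.size(-1))                 # (B*T, dim)
        dists = torch.cdist(flat, self.codebook.weight)  # (B*T, num_codes)
        indices = dists.argmin(dim=1)                    # nearest code per token
        z_q = self.codebook(indices).view_as(z)          # quantized vectors
        # Codebook + commitment losses from the VQ-VAE objective
        loss = F.mse_loss(z_q, z.detach()) + self.beta * F.mse_loss(z, z_q.detach())
        # Straight-through: pass decoder gradients back to the encoder
        z_q = z + (z_q - z).detach()
        return z_q, indices.view(z.shape[:-1]), loss

class EEGEncoder(nn.Module):
    """Toy Transformer encoder over fixed-length EEG patches, followed by VQ.
    Hyperparameters here are illustrative guesses, not the paper's settings."""
    def __init__(self, patch_len=200, dim=128, num_codes=1024):
        super().__init__()
        self.embed = nn.Linear(patch_len, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.vq = VectorQuantizer(num_codes, dim)

    def forward(self, patches):
        # patches: (batch, num_patches, patch_len) raw EEG segments
        z = self.encoder(self.embed(patches))
        return self.vq(z)  # (quantized reps, discrete token ids, VQ loss)

# Usage: 8 recordings, 50 one-second patches sampled at 200 Hz
x = torch.randn(8, 50, 200)
z_q, tokens, vq_loss = EEGEncoder()(x)
print(z_q.shape, tokens.shape, vq_loss.item())
```

The discrete `tokens` are what make this family of models inspectable: each codebook index can be traced back to the EEG segments that activate it, which is the kind of analysis the paper performs when identifying useful patterns in the learned codebook. For downstream tasks such as seizure detection, a lightweight classifier head would be fit on the quantized representations.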