11 Jan 2024 | Yuqi Chen, Kan Ren, Kaitao Song, Yansen Wang, Yifan Wang, Dongsheng Li, Lili Qiu
EEGFormer is a novel large-scale EEG foundation model that leverages self-supervised learning for transferable and interpretable EEG signal processing. The model is pretrained on a massive 1.7TB EEG dataset from the TUH Corpus, enabling it to learn universal representations of EEG signals that can be adapted to various downstream tasks. EEGFormer uses a vector-quantized Transformer architecture to generate discrete representations, which allows for interpretable outcomes by identifying useful patterns in the data.

The model is evaluated on multiple downstream tasks, including seizure detection, abnormality detection, and emotion recognition, demonstrating its effectiveness in both in-dataset and transfer settings. Performance improves further with fine-tuning, and the model shows strong transferability in detecting anomalies while remaining interpretable. EEGFormer addresses the limitations of existing methods by utilizing large-scale unlabeled data and producing interpretable representations through discrete codebook learning.

The architecture comprises preprocessing, slicing, encoding, decoding, and training components, with a focus on temporal patterns in multi-channel EEG data. Results show that EEGFormer outperforms several baselines, including EEGNet, TCN, and EEG-GNN, in both detection accuracy and interpretability. Interpretability is demonstrated through analysis of the learned codebook and its application to seizure localization. Overall, EEGFormer represents a significant advancement in EEG signal processing, offering a transferable and interpretable foundation model for healthcare applications.
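The discrete codebook learning mentioned above can be illustrated with a minimal vector-quantization sketch: continuous encoder outputs are snapped to their nearest entry in a learned codebook, yielding discrete token indices that can later be inspected for interpretability. This is a generic illustration, not the EEGFormer implementation; the dimensions, the `quantize` function, and the random codebook are all hypothetical.

```python
# Minimal sketch of the vector-quantization step behind discrete codebook
# learning. All names and shapes are illustrative assumptions, not taken
# from the EEGFormer paper.
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: K codebook entries, each of dimension D.
K, D = 64, 16
codebook = rng.normal(size=(K, D))  # in practice, learned during pretraining

def quantize(z: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Map each encoder output vector in z (shape (T, D)) to its nearest
    codebook entry; return the discrete indices and the quantized vectors."""
    # Squared Euclidean distance from every output to every codebook entry.
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (T, K)
    idx = dists.argmin(axis=1)       # one discrete token per time step
    return idx, codebook[idx]

# Example: quantize a sequence of 10 hypothetical encoder outputs.
z = rng.normal(size=(10, D))
tokens, z_q = quantize(z)
print(tokens.shape, z_q.shape)       # (10,) (10, 16)
```

Because each time step collapses to an index into a finite codebook, downstream analysis can count and localize which codes fire during, say, seizure segments — the kind of codebook inspection the summary describes.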