EEG2Rep: Enhancing Self-supervised EEG Representation Through Informative Masked Inputs

August 25–29, 2024, Barcelona, Spain | Navid Mohammadi Foumani, Geoffrey Mackellar, Soheila Ghane, Saad Irtza, Nam Nguyen, Mahsa Salehi
EEG2Rep is a self-supervised approach designed to improve representation learning on electroencephalography (EEG) data, addressing three key challenges: low signal-to-noise ratio, wide amplitude ranges, and the lack of explicit segmentation in continuous-valued sequences. Its core components are predicting masked inputs in the latent representation space and a semantic subsequence preserving (SSP) method that guides the model towards generating rich semantic representations. Experiments on six diverse EEG tasks with subject variability show that EEG2Rep significantly outperforms state-of-the-art methods, with preserving 50% of the EEG recording yielding the most accurate results on average. The model is also robust to noise, making it a promising solution for EEG representation learning.
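To make the masking idea concrete, below is a minimal sketch of SSP-style masking: a few contiguous subsequences of EEG patches are preserved as visible context, and the remaining positions become the targets that would be predicted in latent space. This is an illustrative reading of the abstract, not the authors' code; the function name `ssp_mask`, the patching scheme, and all parameter values are assumptions.

```python
import numpy as np

def ssp_mask(num_patches: int, preserve_ratio: float = 0.5,
             num_blocks: int = 3, seed: int | None = None) -> np.ndarray:
    """Hypothetical SSP-style mask: True = preserved (visible) patch.

    Preserves a few contiguous blocks covering roughly `preserve_ratio`
    of the sequence so the visible context keeps local semantic structure;
    the False positions are the latent-space prediction targets.
    """
    rng = np.random.default_rng(seed)
    keep = np.zeros(num_patches, dtype=bool)
    block_len = max(1, round(num_patches * preserve_ratio / num_blocks))
    # Sample distinct block start positions; blocks may overlap, so the
    # realized preserved fraction can fall slightly below preserve_ratio.
    starts = rng.choice(num_patches - block_len + 1, size=num_blocks, replace=False)
    for s in starts:
        keep[s:s + block_len] = True
    return keep

# Example: a 64-patch EEG window with ~50% preserved (the ratio the
# abstract reports as most accurate on average).
mask = ssp_mask(num_patches=64, preserve_ratio=0.5, seed=0)
print(mask.astype(int))             # 1 = visible context, 0 = masked target
print(f"preserved: {mask.mean():.0%}")
```

Preserving contiguous blocks, rather than scattering masked positions uniformly, is what keeps the visible input semantically meaningful for the encoder; the 50% ratio in the example reflects the setting the abstract reports as best on average.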