18 June 2015 | Nima Bigdely-Shamlo, Tim Mullen, Christian Kothe, Kyung-Min Su and Kay A. Robbins
The PREP pipeline is a standardized, fully automated preprocessing pipeline for large-scale EEG analysis. It detects and removes experimentally generated artifacts, such as electrical line-noise interference, and identifies channels contaminated by subject-generated artifacts such as eye blinks or muscle activity. PREP produces an automated report for each dataset it processes and is distributed as a freely available MATLAB library. Processing proceeds in three main steps: initial clean-up, robust referencing, and interpolation of bad channels. The output is an EEG data structure in EEGLAB format, together with auxiliary files for events, channels, and metadata, stored in formats (including HDF5) designed to be accessible from MATLAB, R, Python, Java, and C. Detailed summary information and visualizations help researchers identify unusual features in the data.
Noisy channels and the reference signal interact: a bad channel contaminates an average reference, and a contaminated reference makes good channels look noisy. PREP therefore uses a multi-stage robust referencing scheme that estimates the reference while excluding channels it identifies as bad. Noisy or outlier channels are detected using four main criteria: deviation (abnormally high or low robust amplitude), correlation (low correlation with the other channels), predictability (poor prediction by the other channels), and noisiness (excessive high-frequency content). The implementation is designed for robustness and efficiency, with computational considerations for parallel processing.
The pipeline has been tested on various datasets, including those from the Army Research Laboratory, National Chiao Tung University, and Physionet. For each dataset, PREP produces a PDF report detailing the results of referencing and bad-channel detection, allowing users to identify issues quickly. Additional functions extract statistics from an entire collection, visualize the results, and generate issue reports listing datasets that are likely to remain problematic after referencing.
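Two of the four detection criteria are easy to illustrate. The following sketch (a simplified Python illustration, not PREP's MATLAB implementation; function names, thresholds, and the synthetic data are hypothetical) flags channels by robust amplitude deviation and by low correlation with the rest of the montage:

```python
import numpy as np

def robust_zscore(values):
    """Center by the median, scale by 1.4826 * MAD (a robust std estimate)."""
    med = np.median(values)
    mad = np.median(np.abs(values - med))
    return (values - med) / (1.4826 * mad + 1e-12)

def bad_by_deviation(data, threshold=5.0):
    """Deviation criterion: flag channels whose robust amplitude is an outlier.

    data: (n_channels, n_samples) EEG array.
    """
    amplitude = 1.4826 * np.median(
        np.abs(data - np.median(data, axis=1, keepdims=True)), axis=1)
    return np.flatnonzero(np.abs(robust_zscore(amplitude)) > threshold)

def bad_by_correlation(data, threshold=0.4):
    """Correlation criterion: flag channels poorly correlated with all others."""
    corr = np.corrcoef(data)
    np.fill_diagonal(corr, 0.0)  # ignore each channel's self-correlation
    return np.flatnonzero(np.max(np.abs(corr), axis=1) < threshold)

# Synthetic demo: 8 channels share a 10 Hz oscillation; channel 3 has a
# gross amplitude problem, channel 6 carries an unrelated (phase-shifted)
# signal of normal amplitude.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000, endpoint=False)
common = np.sin(2 * np.pi * 10 * t)
data = common + 0.1 * rng.standard_normal((8, 1000))
data[3] *= 50                                                 # amplitude outlier
data[6] = np.cos(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(1000)

print(bad_by_deviation(data))    # flags the amplitude outlier
print(bad_by_correlation(data))  # flags the uncorrelated channel
```

Note that the two criteria are complementary: channel 3 correlates well with its neighbors despite its amplitude problem, while channel 6 has a normal amplitude despite carrying no shared signal, so neither criterion alone would catch both.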
The pipeline has been shown to remove line noise effectively and to improve the signal-to-noise ratio while preserving the integrity of the underlying data. It is also flexible: users can customize individual preprocessing steps to their specific needs. Applied to a variety of datasets, including those from the KaggleBCI competition, it has produced reliable results. By providing a standardized, automated approach to preprocessing, the PREP pipeline is an important tool for large-scale EEG analysis, helping researchers analyze data more efficiently and effectively.
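The multi-stage robust referencing scheme mentioned above can be sketched as an iterative loop: estimate the reference from the channels currently considered good, re-detect bad channels on the referenced data, and repeat until the bad-channel set stabilizes. This Python sketch is a simplified illustration of that idea only (PREP's actual procedure also interpolates bad channels before recomputing the reference; the detector, names, and thresholds here are illustrative):

```python
import numpy as np

def amplitude_outliers(data, threshold=5.0):
    """Toy detector: flag channels whose robust amplitude is an outlier."""
    amp = 1.4826 * np.median(
        np.abs(data - np.median(data, axis=1, keepdims=True)), axis=1)
    med = np.median(amp)
    mad = np.median(np.abs(amp - med))
    return set(np.flatnonzero(np.abs(amp - med) > threshold * (1.4826 * mad + 1e-12)))

def robust_reference(data, detect=amplitude_outliers, max_iter=4):
    """Iteratively estimate a reference from channels not flagged as bad.

    1. Estimate the reference from the currently good channels.
    2. Re-detect bad channels on the referenced data.
    3. Repeat until the bad-channel set stops growing.
    """
    n_channels = data.shape[0]
    bad = set()
    ref = np.zeros(data.shape[1])
    for _ in range(max_iter):
        good = [i for i in range(n_channels) if i not in bad]
        ref = data[good].mean(axis=0)
        new_bad = bad | detect(data - ref)
        if new_bad == bad:
            break
        bad = new_bad
    return ref, sorted(bad)

# Demo: 6 channels share a signal; channel 2 is a gross outlier that would
# badly skew a naive average reference.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1000, endpoint=False)
signal = np.sin(2 * np.pi * 10 * t)
data = signal + 0.1 * rng.standard_normal((6, 1000))
data[2] *= 50

ref, bad = robust_reference(data)
print(bad)  # the outlier channel is excluded from the reference estimate
```

The design choice the loop captures is exactly the interaction described earlier: a naive average reference computed over all six channels would be dominated by channel 2, making every other channel look deviant, whereas the iterative estimate converges to a reference built only from the good channels.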