Medformer: A Multi-Granularity Patching Transformer for Medical Time-Series Classification

19 Oct 2024 | Yihe Wang*, Nan Huang*, Taida Li*, Yujun Yan, Xiang Zhang
Medformer is a multi-granularity patching transformer designed for medical time-series (MedTS) classification. The paper introduces three mechanisms that exploit the distinctive characteristics of MedTS: cross-channel patching to capture inter-channel correlations, multi-granularity embedding to capture features at different temporal scales, and a two-stage (intra- and inter-granularity) self-attention mechanism to learn features and correlations both within and across granularities.

The model is evaluated on five public datasets, three EEG and two ECG, under both subject-dependent and subject-independent setups. Medformer outperforms 10 baselines, achieving the top averaged rank across the five datasets on all six evaluation metrics. These results underline its suitability for healthcare applications such as diagnosing myocardial infarction, Alzheimer's disease, and Parkinson's disease. The source code is available at <https://github.com/DL4mHealth/Medformer>.

The paper also reviews related work on MedTS classification and time-series transformers, and details the two core architectural components: cross-channel multi-granularity patch embedding and two-stage multi-granularity self-attention (both are sketched below).
The experiments demonstrate that Medformer is accurate and robust for MedTS classification, with particularly strong performance under the subject-independent setup, which is more challenging and closer to real-world use. The authors conclude that Medformer is a promising approach with practical potential, while acknowledging limitations such as the need to tune patch lengths carefully and the absence of mechanisms tailored to the subject-independent setup. Future work could explore automatic selection of patch lengths and decomposition of subject-specific features to improve learning in that setting.
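To make the first component concrete, below is a minimal PyTorch sketch of cross-channel multi-granularity patch embedding, written under stated assumptions: each patch spans a fixed window of time steps across all channels, a separate linear projection per patch length maps the flattened patch to the model dimension, and learned positional and granularity embeddings are added. The class name, default patch lengths, and embedding initialization are illustrative choices rather than the authors' code; the linked repository is authoritative.

```python
# Minimal sketch (not the official Medformer code): cross-channel,
# multi-granularity patch embedding for input of shape (batch, time, channels).
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossChannelMultiGranularityEmbedding(nn.Module):
    def __init__(self, n_channels, d_model, patch_lens=(2, 4, 8), max_tokens=512):
        super().__init__()
        self.patch_lens = patch_lens
        # One projection per granularity: a patch covers ALL channels
        # (cross-channel), so its flattened size is patch_len * n_channels.
        self.proj = nn.ModuleList(
            [nn.Linear(L * n_channels, d_model) for L in patch_lens]
        )
        # Learned positional embeddings (shared across granularities) plus a
        # per-granularity embedding so tokens of different scales stay distinguishable.
        self.pos_emb = nn.Parameter(torch.randn(max_tokens, d_model) * 0.02)
        self.gran_emb = nn.Parameter(torch.randn(len(patch_lens), d_model) * 0.02)

    def forward(self, x):
        # x: (batch, time, channels); assumes time // min(patch_lens) <= max_tokens.
        B, T, C = x.shape
        tokens_per_granularity = []
        for i, L in enumerate(self.patch_lens):
            # Zero-pad the time axis so it divides evenly into patches of length L.
            pad = (L - T % L) % L
            xp = F.pad(x, (0, 0, 0, pad))                 # (B, T + pad, C)
            n_patches = xp.shape[1] // L
            # Each patch keeps every channel: (B, n_patches, L * C).
            patches = xp.reshape(B, n_patches, L * C)
            tok = self.proj[i](patches)                   # (B, n_patches, d_model)
            tok = tok + self.pos_emb[:n_patches] + self.gran_emb[i]
            tokens_per_granularity.append(tok)
        # One token sequence per granularity; lengths differ by patch length.
        return tokens_per_granularity
```

Because each token is flattened across channels before projection, a single token can mix information from all channels within its window, which is the stated motivation for cross-channel patching.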
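The second component, two-stage multi-granularity self-attention, can be sketched in the same spirit: stage one attends within each granularity's token sequence, and stage two exchanges information across granularities. This summary does not spell out how the inter-granularity stage is implemented, so the mean-pooled summary tokens and the residual broadcast below are assumptions for illustration only.

```python
# Minimal sketch (assumed design, not the official implementation) of two-stage
# multi-granularity self-attention: intra-granularity attention first, then
# inter-granularity attention over one summary token per granularity.
import torch
import torch.nn as nn


class TwoStageMultiGranularityAttention(nn.Module):
    def __init__(self, d_model, n_heads=4):
        super().__init__()
        self.intra_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.inter_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm_intra = nn.LayerNorm(d_model)
        self.norm_inter = nn.LayerNorm(d_model)

    def forward(self, tokens_per_granularity):
        # tokens_per_granularity: list of (B, n_i, d_model), one entry per granularity.
        # Stage 1: intra-granularity self-attention, run independently per granularity.
        intra = []
        for tok in tokens_per_granularity:
            out, _ = self.intra_attn(tok, tok, tok)
            intra.append(self.norm_intra(tok + out))

        # Stage 2: inter-granularity attention over per-granularity summaries
        # (mean pooling is an assumption made for this sketch).
        summaries = torch.stack([t.mean(dim=1) for t in intra], dim=1)  # (B, G, d_model)
        mixed, _ = self.inter_attn(summaries, summaries, summaries)
        mixed = self.norm_inter(summaries + mixed)                      # (B, G, d_model)

        # Broadcast each granularity's cross-granularity context back onto its tokens.
        return [t + mixed[:, g : g + 1, :] for g, t in enumerate(intra)]


# Hypothetical wiring of the two sketches (shapes only):
#   emb = CrossChannelMultiGranularityEmbedding(n_channels=19, d_model=128)
#   attn = TwoStageMultiGranularityAttention(d_model=128)
#   outs = attn(emb(torch.randn(8, 256, 19)))   # list of (8, n_i, 128) tensors
```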