PARAMETRIC AUGMENTATION FOR TIME SERIES CONTRASTIVE LEARNING

16 Feb 2024 | Xu Zheng, Tianchun Wang, Wei Cheng, Aitian Ma, Haifeng Chen, Mo Sha, Dongsheng Luo
The article introduces AutoTCL, a contrastive learning framework for time series that uses parametric augmentation to improve representation learning. Traditional time series contrastive learning relies on hand-crafted, domain-specific augmentation techniques, which are often ineffective given the complexity and variability of time series data. AutoTCL instead draws on information theory to characterize good augmentations as those that preserve semantics while introducing sufficient variance.

The framework factorizes a time series into informative and task-irrelevant components, then applies a parametric neural network to generate augmented views that retain the original semantics while increasing diversity. The approach is encoder-agnostic, so it can be integrated with a variety of backbone encoders.

Experiments on univariate and multivariate time series forecasting show that AutoTCL achieves significant improvements over existing methods: a 6.5% reduction in MSE and 4.7% in MAE for univariate forecasting, and a 2.9% reduction in MSE and 1.2% in MAE for multivariate forecasting. On classification tasks, AutoTCL improves average accuracy by 1.2%. It also performs strongly against state-of-the-art methods including TS2Vec, Informer, and InfoTS. By learning robust, discriminative representations for time series data, the method is a promising approach for a wide range of applications.
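To make the factorize-then-augment idea concrete, here is a minimal sketch of the workflow the article describes: split a series into an informative component and a task-irrelevant residual, then perturb only the residual to produce a view that preserves the informative part. The soft gating function and noise model below are illustrative placeholders, not the learned parametric networks used in AutoTCL.

```python
import numpy as np

rng = np.random.default_rng(0)

def factorize(x, threshold=0.5):
    """Split a series into informative and task-irrelevant parts via a
    soft mask. AutoTCL learns this mask with a neural network; a fixed
    sigmoid gate on magnitude stands in for it here."""
    gate = 1.0 / (1.0 + np.exp(-(np.abs(x) - threshold)))  # values in (0, 1)
    informative = gate * x
    residual = (1.0 - gate) * x
    return informative, residual  # informative + residual == x

def parametric_augment(x, scale=0.1):
    """Generate an augmented view: keep the informative component intact
    and perturb only the task-irrelevant residual, so the view varies
    without losing the original semantics."""
    informative, residual = factorize(x)
    perturbed = residual * (1.0 + rng.normal(0.0, scale, size=x.shape))
    return informative + perturbed

# Example: two augmented views of one series, as used for a positive pair
x = np.sin(np.linspace(0.0, 6.28, 100))
view_a = parametric_augment(x)
view_b = parametric_augment(x)
```

In a full pipeline, `view_a` and `view_b` would be fed through the backbone encoder and pulled together by the contrastive loss, while views of other series serve as negatives.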