8 Dec 2020 | George Zerveas, Srideepika Jayaraman, Dhaval Patel, Anuradha Bhamidipaty, Carsten Eickhoff
This paper introduces a transformer-based framework for unsupervised representation learning of multivariate time series (MTS). The framework pre-trains a transformer encoder on unlabeled data; the resulting model can then be fine-tuned for downstream tasks such as regression, classification, imputation, and forecasting. Evaluated on several benchmark datasets, the approach outperforms current state-of-the-art methods, even with limited training samples, and is the first to show that unsupervised pre-training can surpass supervised learning on MTS tasks without requiring additional unlabeled data. The paper also discusses the model's computational efficiency and its applicability to different types of MTS data.
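To make the pre-training objective concrete, here is a minimal PyTorch sketch of the idea: hide a random subset of input values and train a transformer encoder to reconstruct them, scoring only the masked positions with an MSE loss. The class names, hyperparameters, and the simple i.i.d. masking scheme are illustrative assumptions on my part (the paper masks contiguous per-variable segments); this is a sketch of the approach, not the authors' implementation.

```python
import torch
import torch.nn as nn


class MTSTransformerEncoder(nn.Module):
    """Minimal transformer encoder for multivariate time series (illustrative).

    Projects each time step's feature vector to d_model dimensions, adds a
    learned positional embedding, applies transformer encoder layers, and maps
    each encoded step back to the input feature space so the model can be
    pre-trained to reconstruct masked input values.
    """

    def __init__(self, n_features: int, d_model: int = 64, n_heads: int = 4,
                 n_layers: int = 3, max_len: int = 512):
        super().__init__()
        self.input_proj = nn.Linear(n_features, d_model)
        self.pos_emb = nn.Parameter(torch.randn(1, max_len, d_model) * 0.02)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads,
            dim_feedforward=4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.output_proj = nn.Linear(d_model, n_features)  # reconstruction head

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, n_features)
        h = self.input_proj(x) + self.pos_emb[:, : x.size(1)]
        return self.output_proj(self.encoder(h))


def masked_reconstruction_loss(model: nn.Module, x: torch.Tensor,
                               mask_ratio: float = 0.15) -> torch.Tensor:
    """One unsupervised pre-training step: zero out random (time step,
    variable) entries and compute MSE only on those hidden positions."""
    mask = torch.rand_like(x) < mask_ratio      # True = hidden from the model
    pred = model(x.masked_fill(mask, 0.0))
    return ((pred - x)[mask] ** 2).mean()


if __name__ == "__main__":
    model = MTSTransformerEncoder(n_features=8)
    x = torch.randn(32, 100, 8)                 # toy batch of MTS windows
    loss = masked_reconstruction_loss(model, x)
    loss.backward()
    print(f"pre-training loss: {loss.item():.4f}")
```

For fine-tuning, the reconstruction head would be replaced by a task-specific head (e.g., a linear classifier over the pooled encoder outputs) and the whole model trained on the labeled downstream data.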