Timer: Generative Pre-trained Transformers Are Large Time Series Models

4 Jun 2024 | Yong Liu*, Haoran Zhang*, Chenyu Li*, Xiangdong Huang, Jianmin Wang, Mingsheng Long
This paper addresses the challenges of time series analysis in data-scarce scenarios, where deep models often struggle despite their performance on current benchmarks. To overcome these limitations, the authors propose the development of *large time series models* (LTSMs) through large-scale pre-training. They curate a Unified Time Series Dataset (UTSD) with up to 1 billion time points, unify heterogeneous time series into a *single-series sequence* (S3) format, and develop a GPT-style architecture for LTSMs. The Time Series Transformer (Timer) is trained with generative pre-training on next token prediction and adapted to downstream tasks such as forecasting, imputation, and anomaly detection. Timer demonstrates promising capabilities as an LTSM, outperforming state-of-the-art task-specific models in few-shot scenarios. The paper also evaluates Timer's scalability and zero-shot forecasting capabilities, providing valuable insights for future research and practical applications.
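The following is a minimal sketch, not the authors' released implementation, of the generative pre-training objective described above: a univariate series (in the spirit of the S3 layout) is split into fixed-length segments that serve as tokens, a causally masked Transformer stack predicts each next segment, and training minimizes the prediction error. The segment length, model sizes, and all class and function names here are illustrative assumptions.

```python
# Sketch of GPT-style next-segment pre-training on time series (assumed setup,
# not the paper's code). A causal mask over a standard Transformer encoder stack
# gives a decoder-only model, as in GPT.
import torch
import torch.nn as nn

class CausalTimeSeriesDecoder(nn.Module):
    def __init__(self, seg_len=96, d_model=256, n_heads=8, n_layers=4):
        super().__init__()
        self.seg_len = seg_len
        self.embed = nn.Linear(seg_len, d_model)   # segment of values -> token embedding
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model,
            batch_first=True, norm_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, seg_len)    # token -> predicted next segment

    def forward(self, series):
        # series: (batch, n_segments * seg_len) univariate values
        b, t = series.shape
        tokens = series.view(b, t // self.seg_len, self.seg_len)
        x = self.embed(tokens)
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1)).to(x.device)
        x = self.blocks(x, mask=mask)
        return self.head(x)                        # one prediction per input segment

def pretrain_step(model, series, optimizer):
    """One generative pre-training step: predict segment i+1 from segments <= i."""
    pred = model(series)
    target = series.view(series.size(0), -1, model.seg_len)
    loss = nn.functional.mse_loss(pred[:, :-1], target[:, 1:])  # shift by one segment
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = CausalTimeSeriesDecoder()
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    fake_series = torch.randn(8, 15 * 96)   # 8 toy series of 15 segments each
    print(pretrain_step(model, fake_series, opt))
```

Because every prefix of the segment sequence yields a training target, the same pre-trained model can be reused autoregressively at inference, which is what enables the forecasting, imputation, and zero-shot behaviors evaluated in the paper.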