AutoTimes: Autoregressive Time Series Forecasters via Large Language Models

2024 | Yong Liu; Guo Qin; Xiangdong Huang; Jianmin Wang; Mingsheng Long
AutoTimes repurposes large language models (LLMs) as autoregressive time series forecasters. It projects time series segments into the embedding space of language tokens and generates future values autoregressively, which makes it compatible with any decoder-only LLM, supports flexible lookback lengths, and scales with larger backbones. The method also introduces in-context forecasting, in which a series is self-prompted with additional examples so that predictions can draw on context beyond the lookback window, and it embeds textual timestamps so that chronological information can be used to align multivariate series.

Empirically, AutoTimes achieves state-of-the-art results with only 0.1% trainable parameters and over a 5× training/inference speedup compared to advanced LLM-based forecasters. It is evaluated on both short-term and long-term forecasting benchmarks and outperforms prior methods in both settings, while also exhibiting zero-shot generalizability and in-context forecasting. Because only lightweight projections are trained, the method is efficient in parameters and training cost, and performance improves as the size of the underlying LLM grows, suggesting that the inherent autoregressive properties of LLMs transfer effectively to time series forecasting.
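The core mechanism, training two small projections around a frozen decoder-only LLM, can be illustrated with a minimal sketch. The snippet below assumes a PyTorch setup with GPT-2 from Hugging Face transformers as the backbone; the segment length, module names, and rollout helper are illustrative choices, not the authors' identifiers, and the paper's textual timestamp embeddings are omitted for brevity.

```python
# Minimal sketch of the AutoTimes idea (assumptions noted above): a frozen
# decoder-only LLM is repurposed as an autoregressive forecaster, and only
# the two linear projections are trained.
import torch
import torch.nn as nn
from transformers import GPT2Model


class LLMForecaster(nn.Module):
    def __init__(self, seg_len: int = 96, d_model: int = 768):
        super().__init__()
        self.seg_len = seg_len
        self.llm = GPT2Model.from_pretrained("gpt2")  # frozen backbone
        for p in self.llm.parameters():
            p.requires_grad = False
        # Trainable adapters: series segment -> token embedding, and back.
        self.embed = nn.Linear(seg_len, d_model)
        self.head = nn.Linear(d_model, seg_len)

    def forward(self, series: torch.Tensor) -> torch.Tensor:
        # series: (batch, lookback), lookback divisible by seg_len.
        b = series.shape[0]
        segments = series.reshape(b, -1, self.seg_len)   # (b, n_seg, seg_len)
        tokens = self.embed(segments)                    # (b, n_seg, d_model)
        hidden = self.llm(inputs_embeds=tokens).last_hidden_state
        # Next-token objective: each position predicts the following segment;
        # at inference the last position yields the first future segment.
        return self.head(hidden)                         # (b, n_seg, seg_len)


@torch.no_grad()
def forecast(model: LLMForecaster, history: torch.Tensor, n_segments: int):
    # Autoregressive rollout: append each predicted segment and continue.
    context = history
    for _ in range(n_segments):
        next_seg = model(context)[:, -1, :]              # newest prediction
        context = torch.cat([context, next_seg], dim=1)
    return context[:, history.shape[1]:]
```

Because the LLM weights never change, the trainable footprint is just the two linear layers, which is what makes the reported 0.1% trainable-parameter figure plausible; longer lookbacks simply mean more segment tokens in the same causal context.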