AutoTimes: Autoregressive Time Series Forecasters via Large Language Models

31 Oct 2024 | Yong Liu, Guo Qin, Xiangdong Huang, Jianmin Wang, Mingsheng Long
The paper "AutoTimes: Autoregressive Time Series Forecasters via Large Language Models" by Yong Liu, Guo Qin, Xiangdong Huang, Jianmin Wang, and Mingsheng Long proposes a novel approach to leverage large language models (LLMs) for time series forecasting. The authors address the limitations of existing methods that often fail to fully utilize the autoregressive property and decoder-only architecture of LLMs, which are inherent to time series forecasting. By repurposing LLMs as autoregressive time series forecasters, AutoTimes projects time series data into the embedding space of language tokens and generates future predictions with arbitrary lengths. The method is compatible with any decoder-only LLM and exhibits flexibility in lookback length and scalability with larger models. Additionally, AutoTimes introduces in-context forecasting, extending the context for prediction beyond the lookback window by using relevant time series prompts. Empirical results show that AutoTimes achieves state-of-the-art performance with minimal trainable parameters and significant speedup compared to advanced LLM-based forecasters. The method also demonstrates zero-shot generalization and in-context learning capabilities, making it a versatile and efficient solution for time series forecasting.The paper "AutoTimes: Autoregressive Time Series Forecasters via Large Language Models" by Yong Liu, Guo Qin, Xiangdong Huang, Jianmin Wang, and Mingsheng Long proposes a novel approach to leverage large language models (LLMs) for time series forecasting. The authors address the limitations of existing methods that often fail to fully utilize the autoregressive property and decoder-only architecture of LLMs, which are inherent to time series forecasting. By repurposing LLMs as autoregressive time series forecasters, AutoTimes projects time series data into the embedding space of language tokens and generates future predictions with arbitrary lengths. The method is compatible with any decoder-only LLM and exhibits flexibility in lookback length and scalability with larger models. Additionally, AutoTimes introduces in-context forecasting, extending the context for prediction beyond the lookback window by using relevant time series prompts. Empirical results show that AutoTimes achieves state-of-the-art performance with minimal trainable parameters and significant speedup compared to advanced LLM-based forecasters. The method also demonstrates zero-shot generalization and in-context learning capabilities, making it a versatile and efficient solution for time series forecasting.