25 Feb 2024 | Haoxin Liu, Zhiyuan Zhao, Jindong Wang, Harshavardhan Kamarthi, B. Aditya Prakash
LSTPrompt is a novel approach for prompting large language models (LLMs) on zero-shot time series forecasting (TSF) tasks. It decomposes TSF into short-term and long-term forecasting subtasks and tailors prompts to each. To this end, it introduces TimeDecomp, which breaks TSF down into sequential subtasks, and TimeBreath, which guides the LLM to periodically reassess its forecasting mechanisms, enhancing adaptability.
Evaluations on multiple benchmark and concurrent datasets show that LSTPrompt consistently outperforms existing prompting methods and achieves competitive results against foundation TSF models, even surpassing the best supervised results in certain scenarios. The method demonstrates strong generalization: it is designed to work on any TS dataset and can be tailored to different scenarios by adjusting a single hyperparameter, k.
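To make the decomposition idea concrete, here is a minimal sketch of how a long-short-term prompt might be assembled: the forecast horizon is split at the hyperparameter k into a short-term and a long-term subtask, with a reassessment instruction in between. The prompt wording, the function name `build_lst_prompt`, and its parameters are illustrative assumptions, not the paper's actual templates.

```python
def build_lst_prompt(history, horizon, k):
    """Sketch of a long-short-term forecasting prompt (hypothetical template).

    The horizon is split into a short-term segment (first k steps) and a
    long-term segment (remaining steps), mirroring the TimeDecomp idea,
    with a TimeBreath-style reassessment instruction between the two.
    """
    series = ", ".join(f"{x:.2f}" for x in history)
    short_h = min(k, horizon)
    long_h = horizon - short_h
    prompt = (
        f"Here is a time series: {series}\n"
        f"Step 1 (short-term): forecast the next {short_h} values, "
        "focusing on local trends and recent fluctuations.\n"
    )
    if long_h > 0:
        prompt += (
            "Take a breath and reassess your forecasting mechanism.\n"
            f"Step 2 (long-term): forecast the following {long_h} values, "
            "focusing on global patterns such as seasonality.\n"
        )
    prompt += "Output only the comma-separated numeric forecasts."
    return prompt


print(build_lst_prompt([1.0, 2.0, 3.0, 2.5], horizon=6, k=2))
```

Adjusting k shifts how much of the horizon is treated as short-term versus long-term, which is how the sketch models tailoring the method to different scenarios.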