6 May 2024 | Xiyuan Zhang, Ranak Roy Chowdhury, Rajesh K. Gupta and Jingbo Shang
This survey explores the application of Large Language Models (LLMs) to time series analysis, addressing the challenge of adapting LLMs, originally trained on textual data, to numerical time series. The paper presents a comprehensive taxonomy of five methodologies: (1) direct prompting, (2) time series quantization, (3) aligning, (4) using vision as a bridge, and (5) tool integration. Each method is discussed in terms of its approach, advantages, and limitations. The survey also provides an overview of existing multimodal datasets and highlights challenges and future directions in this emerging field.

Key findings include the effectiveness of prompting and quantization methods for zero-shot time series tasks, the importance of aligning time series with language models for semantic understanding, and the potential of vision-based approaches to bridge the modality gap. The paper emphasizes the need for efficient algorithms, domain knowledge integration, and privacy considerations in large-scale time series analysis. It concludes with a call for further research to enhance the synergy between LLMs and time series analysis, particularly in multimodal and multitask scenarios.
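To make the first taxonomy category concrete, direct prompting typically serializes the raw numeric series into text and wraps it in a natural-language instruction, so a pretrained LLM can be queried zero-shot. The sketch below is illustrative only: the helper names (`serialize_series`, `build_forecast_prompt`), the fixed-decimal formatting, and the prompt wording are assumptions, not the scheme of any specific paper surveyed.

```python
def serialize_series(values, sep=", ", decimals=2):
    """Render a numeric time series as plain text.

    Hypothetical helper: prompting-based methods use similar textual
    encodings so the LLM tokenizer sees the numbers as ordinary tokens;
    the exact formatting scheme here is illustrative.
    """
    return sep.join(f"{v:.{decimals}f}" for v in values)


def build_forecast_prompt(history, horizon):
    """Wrap the serialized history in a zero-shot forecasting instruction."""
    return (
        "You are a time series forecaster.\n"
        f"Given the sequence: {serialize_series(history)}\n"
        f"Predict the next {horizon} values, comma-separated."
    )


# Example: the prompt string would then be sent to an LLM of choice.
prompt = build_forecast_prompt([1.0, 1.5, 2.25], horizon=2)
```

This illustrates why the survey groups prompting with quantization as zero-shot-friendly: neither requires gradient updates to the LLM, only a textual (or token-level) encoding of the series.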