Large language models can be zero-shot anomaly detectors for time series?

12 Aug 2024 | Sarah Alnegheimish, Linh Nguyen, Laure Berti-Equille, Kalyan Veeramachaneni
Large language models (LLMs) can serve as zero-shot anomaly detectors for time series data. This study introduces SIGLLM, a framework that comprises a time-series-to-text conversion module and end-to-end pipelines that prompt LLMs to detect anomalies. Two approaches are explored: PROMPTER, which directly asks the LLM to identify anomalous values in an input sequence, and DETECTOR, which exploits the LLM's forecasting ability and flags anomalies by comparing the original signal with the forecast.

The framework integrates proprietary models such as GPT-3.5 and open-source models from Hugging Face. Evaluated on 11 datasets against 10 reference pipelines, the forecasting-based DETECTOR outperforms the prompting-based PROMPTER on every dataset, beating it by 135% in F1 score and reaching an average F1 score of 0.525. LLMs can therefore detect anomalies, but state-of-the-art deep learning models still perform roughly 30% better.

The study also examines practicality: although LLMs work in zero-shot settings without task-specific training, their performance is constrained by context window size and inference latency. The authors conclude that LLMs are a viable alternative to traditional methods for time series anomaly detection, though further research is needed to improve accuracy and reduce cost.
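The two ideas above can be sketched in a few lines of Python. The helper names, the scaling choices in the text conversion, and the k-sigma error threshold are illustrative assumptions, not the paper's exact implementation:

```python
# Sketch of the SIGLLM workflow described above: (1) turn a numeric series
# into a compact text prompt, and (2) DETECTOR-style post-processing that
# flags points where the LLM forecast deviates sharply from the signal.

def series_to_text(values, decimals=2):
    """Convert a numeric series to a comma-separated digit string.

    Values are min-shifted to be non-negative and rounded to integers,
    since LLM tokenizers handle short digit strings more reliably than
    raw floats. (The exact scaling is an assumption for this sketch.)
    """
    lo = min(values)
    shifted = [round((v - lo) * 10 ** decimals) for v in values]
    return ",".join(str(v) for v in shifted)

def detector_flags(signal, forecast, k=2.0):
    """Flag indices where the absolute error between the signal and the
    forecast exceeds the mean error by more than k standard deviations."""
    errors = [abs(s - f) for s, f in zip(signal, forecast)]
    mean = sum(errors) / len(errors)
    std = (sum((e - mean) ** 2 for e in errors) / len(errors)) ** 0.5
    threshold = mean + k * std
    return [i for i, e in enumerate(errors) if e > threshold]

# Usage: a mostly flat signal with one spike that a (mock) forecast misses.
signal = [1.0, 1.1, 0.9, 9.0, 1.0, 1.05]
forecast = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
print(series_to_text(signal))            # text representation fed to the LLM
print(detector_flags(signal, forecast))  # -> [3]
```

In the real pipeline the forecast would come from the LLM itself, prompted with the text representation of a rolling window; here a constant mock forecast stands in for it.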