This paper examines the performance and preferences of large language models (LLMs) in zero-shot time series forecasting. Comparing LLMs with traditional forecasting methods, the study finds that LLMs perform well on series with clear trends and seasonal patterns but struggle on datasets lacking periodicity, suggesting that their success depends on identifying and exploiting such regular structure. It also shows that LLMs are most sensitive to the segments of the input sequence closest to the target output and can recognize underlying periodic patterns in the data. To improve performance, the authors propose two techniques: incorporating external human knowledge into the input prompt and converting numerical sequences into natural-language form. Both enhance the model's ability to understand and reason about time series data, improving forecasting accuracy. An analysis of different input strategies confirms that LLMs benefit from additional contextual information and from natural-language representations of the series. Overall, the paper characterizes the strengths and limitations of LLMs in time series forecasting and offers practical strategies to enhance their performance.
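The two prompting techniques could be sketched roughly as follows; the function names, prompt wording, and example data below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the two techniques described above:
# (1) prepending external human knowledge as context, and
# (2) rendering the numeric series in natural language.

def series_to_natural_language(values, unit="units"):
    """Render a numeric sequence as a readable sentence (technique 2)."""
    parts = [f"{v:g} {unit}" for v in values]
    return "The observed values, in order, are: " + ", ".join(parts) + "."

def build_prompt(values, horizon, context=None, unit="units"):
    """Assemble a zero-shot forecasting prompt, optionally with domain knowledge."""
    lines = []
    if context:  # technique 1: external human knowledge in the prompt
        lines.append(f"Background: {context}")
    lines.append(series_to_natural_language(values, unit))
    lines.append(
        f"Predict the next {horizon} values, continuing any trend or seasonal pattern."
    )
    return "\n".join(lines)

prompt = build_prompt(
    [112, 118, 132, 129, 121, 135],
    horizon=3,
    context="Monthly airline passengers; demand peaks every summer.",
    unit="thousand passengers",
)
print(prompt)
```

The resulting prompt would then be sent to the LLM in place of a raw number list, giving the model both the contextual cue and a linguistic framing of the sequence.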