The paper evaluates the capabilities of Large Language Models (LLMs) in generating various types of visualizations. The authors conducted a systematic analysis to assess whether LLMs can correctly generate a wide range of charts, effectively use different visualization libraries, and configure individual chart elements. They selected 24 commonly used chart types and designed prompts to test the performance of ChatGPT3 and ChatGPT4. The results show that ChatGPT4 performed significantly better, generating almost 80% of the charts correctly. The study also found that the choice of visualization library (matplotlib, Plotly, or Altair) and the level of configuration of visual variables significantly affect the output.
The authors conclude that while LLMs show promising results, limitations remain, such as label clipping and overlapping legends, which require further fine-tuning or manual code editing. The paper contributes to the field by providing a comprehensive set of prompts and data sources, along with an analysis of the performance of the different LLMs and libraries.
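To make the failure modes concrete, here is a minimal sketch (not taken from the paper; the data and chart choice are hypothetical) of the kind of matplotlib code an LLM might generate, together with the small manual edits the authors describe as often necessary, such as relocating a legend that overlaps the plot and calling `tight_layout` to prevent label clipping:

```python
# Hypothetical example of LLM-generated chart code plus typical manual fixes.
import matplotlib
matplotlib.use("Agg")  # headless backend so the example runs without a display
import matplotlib.pyplot as plt

# Stand-in data; the paper uses its own curated data sources.
categories = ["A", "B", "C"]
values_2022 = [3, 7, 5]
values_2023 = [4, 6, 8]

fig, ax = plt.subplots()
x = range(len(categories))
ax.bar([i - 0.2 for i in x], values_2022, width=0.4, label="2022")
ax.bar([i + 0.2 for i in x], values_2023, width=0.4, label="2023")
ax.set_xticks(list(x))
ax.set_xticklabels(categories)
ax.set_ylabel("Count")
ax.set_title("Grouped bar chart")

# Manual post-generation edits of the kind the paper reports:
# move the legend outside the axes so it does not cover the bars,
# and tighten the layout so long labels are not clipped at the edge.
ax.legend(loc="upper left", bbox_to_anchor=(1.02, 1.0))
fig.tight_layout()
fig.savefig("chart.png")
```

This illustrates why the authors stress configuration of individual chart elements: the default output may be structurally correct yet still need small edits before the chart is presentable.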