The Impact of Large Language Models on Programming Education and Student Learning Outcomes

13 May 2024 | Gregor Jošt, Viktor Taneski, and Sašo Karakatić
This study investigates the impact of informal use of Large Language Models (LLMs) such as ChatGPT and Copilot on the learning outcomes of undergraduate students in programming education, focusing on React applications. Thirty-two second-year students participated in a ten-week experiment in which they completed programming assignments. The study examined the correlation between LLM usage and student performance, with a particular focus on three use cases: code generation, debugging, and seeking additional explanations. The results revealed a significant negative correlation between reliance on LLMs for critical-thinking-intensive tasks, such as code generation and debugging, and final grades. The correlation between LLM use for seeking additional explanations and final grades was weaker, suggesting that LLMs may serve better as a supplementary learning tool. These findings highlight the importance of balancing LLM integration with the cultivation of independent problem-solving skills, and underscore the need to consider carefully how LLMs are introduced into educational settings so that they support, rather than hinder, the development of essential programming competencies. While LLMs can enhance learning, their overuse may impair students' ability to solve programming tasks independently; the study therefore concludes that a balanced approach to integrating LLMs into programming education is essential to maximize their benefits while fostering self-sufficiency in problem-solving.