How Beginning Programmers and Code LLMs (Mis)read Each Other

May 11-16, 2024 | Sydney Nguyen, Hannah McLean Babe, Yangtian Zi, Arjun Guha, Carolyn Jane Anderson, Molly Q Feldman
This paper presents a large-scale study of how beginning coders interact with Code LLMs. Across three universities, 120 students were asked to describe programming problems in natural language and then to edit their descriptions in response to the code the model generated.

The central finding is that beginning programmers struggle to write and edit prompts, even for problems at their skill level: they have difficulty understanding the generated code and adapting their descriptions to the model. Using a mixed-methods approach that combines qualitative and quantitative measures of success, the authors find that students' overall success rate is low and that the task is mentally demanding. Success is also shaped by factors such as prior programming experience, first-generation status, and demographics.

These results suggest that the full natural-language-to-code task may be very challenging for true novices, and that Code LLMs may not be a viable replacement for traditional programming education, since using them effectively requires a baseline of technical knowledge and understanding. The authors highlight the importance of teaching technical communication and code understanding, and of designing tasks that match students' skill levels and allow iterative refinement of prompts. The findings have implications both for the design of AI-assisted programming systems and for the future of computing education.
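The task students faced can be read as an iterate-and-test loop: write a natural-language description, let the model generate code, read the output, and revise. Below is a minimal sketch of that loop in Python, assuming a hypothetical generate_code wrapper around whatever model is used; the helper names and the test harness are illustrative, not the study's actual apparatus.

from typing import Any, Callable

def generate_code(prompt: str) -> str:
    """Hypothetical stand-in for a Code LLM call; returns Python source."""
    raise NotImplementedError("swap in a real model call here")

def passes_tests(source: str, tests: list[tuple[tuple, Any]]) -> bool:
    """Run the generated source and check it against input/output cases."""
    namespace: dict[str, Any] = {}
    try:
        exec(source, namespace)
        # Pick out the function the model defined (assumes exactly one).
        fn = next(v for k, v in namespace.items()
                  if callable(v) and not k.startswith("__"))
        return all(fn(*args) == expected for args, expected in tests)
    except Exception:
        return False

def prompt_edit_loop(initial_prompt: str,
                     revise: Callable[[str, str], str],
                     tests: list[tuple[tuple, Any]],
                     max_attempts: int = 5) -> bool:
    """One session: describe, generate, inspect the output, re-describe."""
    prompt = initial_prompt
    for _ in range(max_attempts):
        code = generate_code(prompt)
        if passes_tests(code, tests):
            return True                   # the description was model-readable
        prompt = revise(prompt, code)     # participant edits after reading output
    return False

# Example (hypothetical): tests = [((2,), 4), ((5,), 10)] for a doubling
# function, with `revise` supplied by the participant's edits.

The study's difficulty findings map onto the revise step: participants could run the loop, but reading the generated code well enough to produce a better description is exactly where they struggled.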