January 29-February 2, 2024, Sydney, NSW, Australia | Irene Hou, Sophia Mettille, Zhuo Li, Owen Man, Cynthia Zastudil, Stephen MacNeil
This paper investigates the impact of generative AI tools, such as ChatGPT, on computing students' help-seeking preferences and experiences. The study combines survey data (n=47) and interviews (n=8) to explore how students use these tools and compare them with traditional help resources. Key findings include:
1. **Usage Patterns**: Students frequently use online resources (70.2%) and ChatGPT (23.4%) for help, with a bimodal distribution of ChatGPT usage (26.1% daily vs. 34.0% never). Internet resources are the most preferred for learning new concepts, followed by instructors, while ChatGPT is valued for its iterative nature and reduced social pressures.
2. **Task-Specific Preferences**: For writing code, students value ChatGPT's creative potential and its ability to support high-level tasks. For debugging, internet sources are preferred, followed by ChatGPT and TAs. Students develop test cases with ChatGPT less often, relying instead on instructor guidelines.
3. **Perceived Quality and Trust**: Students perceive ChatGPT as more convenient but less trustworthy and of lower quality compared to traditional resources. Trust issues are particularly prominent among first-year students and novices, who are more likely to distrust ChatGPT due to concerns about accuracy and potential academic dishonesty.
4. **Social Dynamics**: Students find it comfortable to seek help from ChatGPT, avoiding social pressures and maintaining peer relationships. However, they still value human interaction and community support.
5. **Iteration and Reformulation**: ChatGPT's iterative capabilities are valued by 44.7% of students, who find them useful for clarifying concepts and asking follow-up questions. However, formulating effective help requests remains a common challenge, and students often need to reformulate their queries to get useful responses.
The study highlights that while generative AI tools offer significant benefits, their effectiveness depends on students' ability to use them skillfully. Instructors should provide guidance to help students maximize the utility of these tools, especially in formulating clear and precise help requests. Future research should explore broader and more diverse samples to extend these findings.