May 11–16, 2024, Honolulu, HI, USA | Minju Park, Sojung Kim, Seunghyun Lee, Soonwoo Kwon, Kyuseok Kim
This paper discusses the design and implementation of a personalized tutoring system that leverages Large Language Models (LLMs) for conversation-based tutoring. The system aims to address the challenges of accurately assessing students and of incorporating that assessment into instruction during the conversation. Key components of the system include:
1. **Student Modeling with Diagnostic Components**: The system assesses students based on cognitive state, affective state, and learning style to tailor instructional strategies.
2. **Conversation-Based Tutoring with LLMs**: The tutoring session is conducted using LLMs, incorporating prompt engineering to integrate student assessment outcomes and various instructional strategies.
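The two components above can be connected by folding the student-model fields into the tutoring prompt. The sketch below illustrates one way this could look; the field names, the `StudentModel` class, and the prompt template are illustrative assumptions, not the authors' actual implementation.

```python
from dataclasses import dataclass

@dataclass
class StudentModel:
    # Diagnostic components named in the paper's student model;
    # the string values used here are illustrative assumptions.
    cognitive_state: str   # e.g. "struggles with pronoun-antecedent agreement"
    affective_state: str   # e.g. "low confidence, mildly frustrated"
    learning_style: str    # e.g. "prefers worked examples over abstract rules"

def build_tutor_prompt(model: StudentModel, concept: str) -> str:
    """Compose a user-adaptive system prompt for the tutoring LLM
    (hypothetical template, not the one used in the paper)."""
    return (
        f"You are an SAT Writing tutor teaching the concept '{concept}'.\n"
        f"Student's cognitive state: {model.cognitive_state}.\n"
        f"Student's affective state: {model.affective_state}; adjust your tone accordingly.\n"
        f"Preferred learning style: {model.learning_style}.\n"
        "Choose an instructional strategy that fits this profile."
    )

prompt = build_tutor_prompt(
    StudentModel(
        cognitive_state="struggles with pronoun-antecedent agreement",
        affective_state="low confidence",
        learning_style="prefers worked examples",
    ),
    "Pronouns",
)
```

The resulting string would be passed as the system prompt of an LLM chat session, so each tutoring turn is conditioned on the diagnostic assessment.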
The authors developed a proof-of-concept tutoring system focused on personalization and tested it with 20 participants. The system teaches three English writing concepts: "Pronouns," "Punctuation," and "Transitions" from the SAT Writing test. The user flow includes an onboarding survey, pre-test, tutoring session, and post-test for each concept. The system's ability to implement personalization was evaluated through interactions with actual learners, providing insights into potential improvements and challenges.
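The per-concept user flow described above (onboarding survey once, then pre-test, tutoring session, and post-test for each of the three concepts) can be sketched as a simple driver loop. All function names here are hypothetical placeholders for the system's actual components.

```python
# Hypothetical sketch of the study's user flow; the callables stand in
# for the real onboarding survey, tests, and LLM tutoring session.
CONCEPTS = ["Pronouns", "Punctuation", "Transitions"]

def run_study(onboard, pre_test, tutor, post_test):
    """Run one participant through the flow: a single onboarding
    survey, then pre-test -> tutoring -> post-test per concept."""
    profile = onboard()                  # onboarding survey, once per participant
    results = {}
    for concept in CONCEPTS:
        pre = pre_test(concept)          # baseline score for this concept
        tutor(profile, concept, pre)     # conversation-based tutoring session
        post = post_test(concept)        # score after tutoring
        results[concept] = {"pre": pre, "post": post}
    return results

# Example with stubbed-in components:
results = run_study(
    onboard=lambda: {"learning_style": "worked examples"},
    pre_test=lambda concept: 0.5,
    tutor=lambda profile, concept, pre: None,
    post_test=lambda concept: 0.8,
)
```

Comparing `pre` and `post` per concept is what allows the learning-gain analysis the authors mention as future work.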
The paper highlights the importance of integrating extensive student modeling and designing user-adaptive prompts to enhance the adaptability and personalization of the learning process. The results show that the system successfully adjusts instructional strategies based on student assessments, but also identifies limitations such as the need for more engaging content and better alignment between post-test questions and tutoring content.
The authors plan to further refine the system and investigate its educational advantages, particularly in terms of learning gain and engagement.