Large Language Models for Education: A Survey and Outlook

1 Apr 2024 | Shen Wang*, Tianlong Xu*, Hang Li*, Chaoli Zhang, Joleen Liang, Jiliang Tang, Philip S. Yu, Qingsong Wen†
This survey paper provides a comprehensive overview of the application of large language models (LLMs) in education, covering various aspects such as student and teacher assistance, adaptive learning, and commercial tools. The authors systematically review technological advancements, organize related datasets and benchmarks, and identify risks and challenges associated with LLMs in education. They also outline future research opportunities, emphasizing the potential for LLMs to revolutionize educational practices and foster personalized learning environments. The paper highlights the benefits of LLMs in areas like question-solving, error correction, and content creation, while also addressing concerns such as bias, reliability, transparency, and overreliance on LLMs. The authors propose several future directions, including pedagogical interest-aligned LLMs, multi-agent education systems, multimodal and multilingual support, edge computing, efficient training of specialized models, and ethical and privacy considerations. The survey aims to provide a technological perspective on LLMs in education, offering a taxonomy and insights into current challenges and potential advancements.