5 Mar 2024 | Hasan A. Rasheed, Christian Weber, Madjid Fathi
This paper explores the use of knowledge graphs (KGs) as a source of contextual information to enhance the precision and reliability of explanations generated by large language models (LLMs) for learning recommendations. The authors address the limitations of LLMs in providing precise and comprehensible explanations, particularly in sensitive educational contexts. By integrating domain experts into the prompt engineering process, the study aims to ensure that the explanations are relevant and accurate.
The proposed approach involves extracting semantic relations and metadata from KGs to enrich the LLM's prompt, guiding it to generate more accurate and relevant explanations. The GPT-4 model generates the explanations, with context provided by the KG and expert input. The explanations are evaluated using ROUGE-N and ROUGE-L measures, as well as through user feedback from learners and domain experts.
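The paper does not reproduce its exact prompt template, but the idea of injecting KG relations and metadata into the prompt can be sketched roughly as follows; the triple format, metadata fields, and instruction wording here are illustrative assumptions, not the authors' actual template.

```python
def build_prompt(triples, metadata, question):
    """Assemble an LLM prompt whose context comes from a knowledge graph.

    triples:  list of (subject, predicate, object) relations from the KG
    metadata: dict of item metadata (hypothetical fields, e.g. difficulty)
    question: the explanation request shown to the model
    """
    facts = "\n".join(f"- {s} {p} {o}" for s, p, o in triples)
    meta = "\n".join(f"- {k}: {v}" for k, v in metadata.items())
    return (
        "Answer using only the facts below; if a fact is missing, say so.\n\n"
        f"Knowledge-graph relations:\n{facts}\n\n"
        f"Course metadata:\n{meta}\n\n"
        f"Question: {question}"
    )
```

Constraining the model to the listed facts is what reduces the risk of imprecise or irrelevant content; the expert input described in the paper would shape both which triples are selected and how the instruction is phrased.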
The results show that the proposed method significantly improves the precision and recall of the generated explanations compared to those produced by GPT-4 alone, reducing the risk of generating imprecise or irrelevant information. The qualitative evaluation also highlights the enhanced acceptance of the explanations and the importance of expert input in ensuring pedagogical quality.
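The precision and recall figures come from ROUGE-style n-gram overlap between generated and reference explanations. A minimal sketch of ROUGE-N (whitespace tokenization assumed; the paper's exact preprocessing is not specified):

```python
from collections import Counter

def ngrams(tokens, n):
    """Return the multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(candidate, reference, n=1):
    """ROUGE-N precision, recall, and F1 between two strings."""
    cand = ngrams(candidate.lower().split(), n)
    ref = ngrams(reference.lower().split(), n)
    overlap = sum((cand & ref).values())  # clipped n-gram matches
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

ROUGE-L works analogously but scores the longest common subsequence instead of fixed-length n-grams, rewarding in-order matches that need not be contiguous.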
The paper concludes by discussing the limitations of the approach, such as the need for larger sample sizes and further research on user-specific data, and outlines future directions for improving the effectiveness of LLM-based explanations in educational settings.