Knowledge Graphs as Context Sources for LLM-Based Explanations of Learning Recommendations


2024 | Hasan A. Rasheed, Christian Weber, Madjid Fathi
This paper proposes an approach that improves the precision and reliability of explanations generated by large language models (LLMs) for learning recommendations by integrating knowledge graphs (KGs) as a source of factual context. The goal is to reduce the risk of model hallucinations and to keep explanations accurate and relevant to the learner's needs. Semantic relations from the KG supply curated knowledge about each learning recommendation, and explanations are designed as textual templates that the LLM fills in. Domain experts take part in the prompt engineering process to ensure that the explanations contain the information relevant to the learner.

The approach is evaluated quantitatively with ROUGE-N and ROUGE-L measures and qualitatively through feedback from experts and learners. Explanations generated with KG-based contextualization achieve higher recall and precision than those generated by the GPT model alone, with a markedly reduced risk of imprecise information. The study also underlines the importance of incorporating domain expertise in the design of explanations and points to further research needs, such as more detailed phrasing and the use of learner data for stronger personalization. Overall, the results demonstrate that combining KGs with LLMs improves the quality and relevance of explanations for learning recommendations.
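As a rough illustration of the pipeline described above (not the authors' implementation), the sketch below shows how KG triples might be serialized into an explanation prompt template and how a generated explanation could be scored against an expert-written reference with ROUGE. All course names, relations, and texts are invented, and the rouge-score package is assumed to be available; the paper's own templates and KG schema are not reproduced here.

```python
# Minimal sketch, assuming hypothetical KG triples and explanation texts.
# It mirrors two steps of the described approach: (1) grounding the prompt in
# KG facts, (2) ROUGE-based comparison against an expert reference.
from rouge_score import rouge_scorer

# Hypothetical semantic relations retrieved from the KG for one recommendation.
kg_triples = [
    ("Intro to Machine Learning", "teaches", "supervised learning"),
    ("Intro to Machine Learning", "requires", "basic Python"),
    ("supervised learning", "isPrerequisiteFor", "Deep Learning Specialization"),
]

def build_prompt(course: str, triples: list[tuple[str, str, str]]) -> str:
    """Serialize KG facts into the context of an explanation prompt template."""
    facts = "\n".join(f"- {s} {p} {o}." for s, p, o in triples)
    return (
        f"Using only the facts below, explain why '{course}' is recommended "
        f"to the learner.\nFacts:\n{facts}\nExplanation:"
    )

prompt = build_prompt("Intro to Machine Learning", kg_triples)
print(prompt)

# ROUGE-N / ROUGE-L comparison of a generated explanation against an
# expert-written reference, as in the paper's quantitative evaluation.
reference = (
    "The course is recommended because it teaches supervised learning, "
    "which is a prerequisite for the Deep Learning Specialization."
)
generated = (
    "This course covers supervised learning, a prerequisite for the "
    "Deep Learning Specialization, so it fits your learning path."
)
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
for metric, score in scorer.score(reference, generated).items():
    print(metric, f"recall={score.recall:.2f}", f"precision={score.precision:.2f}")
```

In the paper's setting, the generated text would come from the LLM prompted with the KG context rather than being hard-coded, and the references would be expert-authored explanations.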