Knowledge Graph Large Language Model (KG-LLM) for Link Prediction

9 Aug 2024 | Dong Shu1, Tianle Chen2*, Mingyu Jin2, Chong Zhang4, Mengnan Du1, Yongfeng Zhang2
The paper introduces the Knowledge Graph Large Language Model (KG-LLM) framework, which leverages large language models (LLMs) to enhance multi-hop link prediction in knowledge graphs (KGs). The framework converts KG data into natural language prompts, allowing LLMs to learn latent representations of entities and their relationships. By fine-tuning three leading LLMs—Flan-T5, Llama2, and Gemma—the framework improves the models' generalization capabilities and accuracy in predicting multi-hop links. The study highlights the effectiveness of Chain-of-Thought (CoT) reasoning and In-Context Learning (ICL) in enhancing model performance. Experimental results on real-world datasets show that the KG-LLM framework significantly improves multi-hop link prediction and relation prediction, both with and without ICL. The framework's ability to handle unseen prompts and its potential for generative multi-hop link prediction make it a promising solution for advanced KG analysis.
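The central idea of converting KG data into natural-language prompts can be sketched as follows. This is a minimal illustrative example, not the paper's actual prompt template: the function name, the triple format, and the exact wording of the prompt are all assumptions made for demonstration.

```python
# Hypothetical sketch: render a multi-hop knowledge-graph path as a
# natural-language prompt for an LLM to answer a link-prediction query.
# The prompt wording below is illustrative; the paper's template may differ.

def path_to_prompt(triples, source, target):
    """Convert a list of (head, relation, tail) triples into a
    multi-hop link-prediction prompt in natural language."""
    facts = " ".join(f"{h} has relation '{r}' with {t}." for h, r, t in triples)
    question = (
        f"Given the facts above, is there a multi-hop link "
        f"between {source} and {target}? Answer yes or no."
    )
    return f"{facts}\n{question}"

# Example two-hop path through a toy knowledge graph.
path = [
    ("Alice", "works_at", "AcmeCorp"),
    ("AcmeCorp", "located_in", "Berlin"),
]
prompt = path_to_prompt(path, "Alice", "Berlin")
print(prompt)
```

In the KG-LLM setting, prompts like this (optionally with Chain-of-Thought reasoning steps or in-context examples prepended) would form the fine-tuning data for models such as Flan-T5, Llama2, or Gemma.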