Knowledge Graph Large Language Model (KG-LLM) for Link Prediction

9 Aug 2024 | Dong Shu, Tianle Chen, Mingyu Jin, Chong Zhang, Mengnan Du, Yongfeng Zhang
This paper introduces the Knowledge Graph Large Language Model (KG-LLM), a novel framework for multi-hop link prediction in knowledge graphs (KGs). KG-LLM converts structured KG data into natural language prompts, enabling large language models (LLMs) to learn latent representations of entities and their relationships. The framework fine-tunes three leading LLMs, Flan-T5, Llama2, and Gemma, to enhance multi-hop link prediction performance, and it also explores the framework's potential to give LLMs zero-shot capabilities for handling previously unseen prompts.

Experimental results show that KG-LLM significantly improves the models' generalization capabilities, leading to more accurate predictions in unfamiliar scenarios. The framework also incorporates Chain-of-Thought (CoT) reasoning and In-Context Learning (ICL) to enhance model performance and enable the models to handle unseen prompts.

KG-LLM is evaluated on four real-world KG datasets (WN18RR, NELL-995, FB15k-237, YAGO3-10) and demonstrates superior performance on multi-hop link prediction tasks, particularly when ICL is used. The framework also shows promise on multi-hop relation prediction tasks, with the models achieving high accuracy in predicting relationships between entities. The study highlights the effectiveness of the KG-LLM framework in addressing the challenges of multi-hop link prediction in KGs.
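To make the core idea concrete, the sketch below illustrates the kind of conversion the paper describes: verbalizing a multi-hop KG path into a natural-language prompt that a fine-tuned LLM answers with a link-prediction verdict, optionally prefixed with an in-context demonstration for the ICL variant. This is a minimal sketch under stated assumptions; the entity names, the `Triple`/`build_prompt` helpers, and the prompt wording are illustrative, not the authors' exact templates.

```python
# Hypothetical sketch of KG-LLM-style prompt construction (not the authors' exact templates).
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Triple:
    head: str
    relation: str
    tail: str

def verbalize_path(path: List[Triple]) -> str:
    """Turn a chain of KG triples into plain-English statements."""
    return " ".join(f"{t.head} has relation '{t.relation}' with {t.tail}." for t in path)

def build_prompt(path: List[Triple], source: str, target: str,
                 icl_example: Optional[str] = None) -> str:
    """Compose a multi-hop link-prediction prompt.

    The "think step by step" cue stands in for CoT-style reasoning; if
    `icl_example` is given, it is prepended as an in-context demonstration,
    mirroring the ICL variant described in the paper.
    """
    question = (f"{verbalize_path(path)} "
                f"Is there a relation between {source} and {target}? "
                f"Think step by step, then answer yes or no.")
    return f"{icl_example}\n\n{question}" if icl_example else question

# Example usage with a toy 2-hop path and one in-context demonstration.
path = [
    Triple("Paris", "capital_of", "France"),
    Triple("France", "member_of", "European Union"),
]
demo = ("London has relation 'capital_of' with United Kingdom. "
        "United Kingdom has relation 'member_of' with G7. "
        "Is there a relation between London and G7? Answer: yes.")

print(build_prompt(path, "Paris", "European Union", icl_example=demo))
```

The resulting text would be fed to the fine-tuned model (e.g., Flan-T5, Llama2, or Gemma) as an instruction, so the link-prediction task is posed entirely in natural language rather than as a structured query.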