KICGPT: Large Language Model with Knowledge in Context for Knowledge Graph Completion

23 Feb 2024 | Yanbin Wei, Qiushi Huang, Yu Zhang, James T. Kwok
KICGPT is a framework that integrates a large language model (LLM) with a triple-based knowledge graph completion (KGC) retriever to address the problem of incomplete knowledge graphs. It leverages in-context learning (ICL) through a strategy called Knowledge Prompt, which encodes structural knowledge from the graph into demonstrations that guide the LLM. Because the LLM contributes its own pretrained knowledge, KICGPT handles long-tail entities effectively without additional training overhead.

The pipeline has two stages: the retriever first produces a ranked list of candidate entities based on retrieval scores, and the LLM then re-ranks the top m candidates using the Knowledge Prompt strategy. KICGPT also applies text self-alignment, in which the LLM rewrites terse relation descriptions into clearer natural language, further improving its link prediction performance.

Experimental results show that KICGPT outperforms existing triple-based and text-based methods, achieving state-of-the-art results on the FB15k-237 and WN18RR benchmarks with minimal training overhead and no fine-tuning of the LLM. It also performs strongly on long-tail entities, confirming the benefit of combining LLMs with traditional KGC methods.
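To make the two-stage design concrete, here is a minimal Python sketch of a retrieve-then-rerank pipeline in the spirit of KICGPT. The function names (`retriever_scores`, `query_llm`), the prompt wording, and the demonstration-selection heuristics are illustrative assumptions, not the paper's exact implementation; the paper's Knowledge Prompt similarly draws "analogy" demonstrations (triples sharing the query relation) and "supplement" demonstrations (triples involving the query entity) from the KG.

```python
# A minimal sketch of a KICGPT-style retrieve-then-rerank pipeline.
# `retriever_scores`, `query_llm`, and the prompt wording are
# illustrative placeholders, not the paper's exact implementation.

from typing import Callable


def kicgpt_link_prediction(
    head: str,
    relation: str,
    entities: list[str],
    kg_triples: list[tuple[str, str, str]],
    retriever_scores: Callable[[str, str, str], float],  # e.g. a trained RotatE scorer
    query_llm: Callable[[str], str],                     # wrapper around an LLM API
    m: int = 20,
    k_demos: int = 5,
) -> list[str]:
    """Rank candidate tail entities for the query (head, relation, ?)."""
    # Stage 1: the triple-based retriever ranks all entities by score.
    ranked = sorted(entities, key=lambda t: retriever_scores(head, relation, t), reverse=True)
    top_m, rest = ranked[:m], ranked[m:]

    # Stage 2: build Knowledge Prompt demonstrations from the KG:
    #   - analogy demos: known triples sharing the query relation;
    #   - supplement demos: known triples involving the query head entity.
    analogy = [t for t in kg_triples if t[1] == relation][:k_demos]
    supplement = [t for t in kg_triples if head in (t[0], t[2])][:k_demos]
    demo_lines = [f"({h}, {r}, {t})" for h, r, t in analogy + supplement]

    # Ask the LLM to re-rank only the top-m candidates given the demos.
    prompt = (
        "Known facts:\n" + "\n".join(demo_lines) + "\n\n"
        f"Query: ({head}, {relation}, ?)\n"
        f"Candidates: {', '.join(top_m)}\n"
        "Re-rank the candidates from most to least plausible, one entity per line."
    )
    reply = query_llm(prompt)
    reranked = [line.strip() for line in reply.splitlines() if line.strip() in top_m]

    # Keep any candidates the LLM omitted in their retriever order,
    # then append the untouched tail of the retriever ranking.
    omitted = [e for e in top_m if e not in reranked]
    return reranked + omitted + rest
```

Text self-alignment can likewise be sketched as a single LLM call that rewrites a raw relation identifier into a readable description, which is then reused when constructing later Knowledge Prompts. Again, the prompt wording here is an assumption:

```python
# Hedged sketch of text self-alignment: the LLM rewrites a terse
# relation identifier into a clearer natural-language description.
def self_align_relation(relation: str, query_llm: Callable[[str], str]) -> str:
    prompt = (
        f"Rewrite the knowledge-graph relation '{relation}' as a short, "
        "clear natural-language description of what it means."
    )
    return query_llm(prompt).strip()

# e.g. '/film/film/genre' -> 'the genre that a film belongs to'
```

Restricting the LLM to re-ranking only the top m retriever candidates keeps the prompt short and avoids asking the model to enumerate the full entity vocabulary, which is what lets the method work without any fine-tuning.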