KICGPT: Large Language Model with Knowledge in Context for Knowledge Graph Completion


23 Feb 2024 | Yanbin Wei, Qiushi Huang, Yu Zhang, James T. Kwok
Knowledge Graph Completion (KGC) is crucial for addressing the incompleteness of knowledge graphs and supporting downstream applications. Existing KGC methods can be categorized into triple-based and text-based approaches. Triple-based methods struggle with long-tail entities due to limited structural information, while text-based methods require costly training and specific fine-tuning for different knowledge graphs. To address these limitations, the paper proposes KICGPT, a framework that integrates a large language model (LLM) and a triple-based KGC retriever. KICGPT uses an in-context learning strategy called Knowledge Prompt, which encodes structural knowledge into demonstrations to guide the LLM. Empirical results on benchmark datasets demonstrate the effectiveness of KICGPT with smaller training overhead and no fine-tuning.

**Contributions:**
- Proposes KICGPT, a novel cost-effective framework for KGC tasks.
- Introduces Knowledge Prompt, a novel in-context learning strategy specifically designed for KGC.
- Achieves state-of-the-art performance with low training overhead.

**Related Work:**
- Discusses existing triple-based and text-based KGC methods and their limitations.
- Reviews recent works on LLMs for KGs and in-context learning.

**Methodology:**
- Describes the problem setting and the overall framework of KICGPT (a minimal illustrative sketch of the retrieve-then-re-rank pipeline follows this list).
- Details the Knowledge Prompt strategy, including the construction of demonstration pools and the ordering of demonstrations.
- Explains the prompt engineering process and text self-alignment for KG text cleaning.
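To make the pipeline concrete, here is a minimal, hedged sketch of the retrieve-then-re-rank idea described above. It is not the authors' implementation: the `score`, `llm`, and `build_knowledge_prompt` interfaces, the pool names, and the prompt wording are illustrative assumptions, and the paper's actual Knowledge Prompt format, demonstration selection, and ordering are more elaborate.

```python
# Hedged sketch of a KICGPT-style retrieve-then-re-rank step (illustrative only).
# Assumptions (not from the paper): a pretrained triple-based retriever exposing
# score(head, relation, tail), a chat-style llm(prompt) -> str callable, and
# plain-string entity/relation names.

from typing import Callable, Dict, List, Tuple

Triple = Tuple[str, str, str]  # (head, relation, tail)


def build_knowledge_prompt(
    query: Tuple[str, str],          # (head, relation); the tail is missing
    candidates: List[str],           # top-m entities from the retriever
    analogy_pool: List[Triple],      # known triples sharing the query relation
    supplement_pool: List[Triple],   # known triples involving the query head
    k_demos: int = 4,
) -> str:
    """Encode structural knowledge as in-context demonstrations."""
    head, relation = query
    lines = ["Task: choose the most plausible tail entity for the query."]

    # Demonstrations: analogous facts with the same relation ...
    for h, r, t in analogy_pool[:k_demos]:
        lines.append(f"Known fact: ({h}, {r}, {t})")
    # ... plus supplementary facts about the query head entity.
    for h, r, t in supplement_pool[:k_demos]:
        lines.append(f"Known fact: ({h}, {r}, {t})")

    lines.append(f"Query: ({head}, {relation}, ?)")
    lines.append("Candidates: " + ", ".join(candidates))
    lines.append("Return the candidates re-ranked from most to least plausible.")
    return "\n".join(lines)


def kicgpt_rank(
    query: Tuple[str, str],
    entities: List[str],
    score: Callable[[str, str, str], float],   # triple-based retriever score
    llm: Callable[[str], str],                 # LLM call, e.g. an API wrapper
    kg_by_relation: Dict[str, List[Triple]],
    kg_by_entity: Dict[str, List[Triple]],
    top_m: int = 20,
) -> List[str]:
    head, relation = query

    # 1) The triple-based retriever produces an initial ranking over all entities.
    ranked = sorted(entities, key=lambda t: score(head, relation, t), reverse=True)
    candidates = ranked[:top_m]

    # 2) The LLM re-ranks only the top-m candidates, guided by the knowledge prompt.
    prompt = build_knowledge_prompt(
        query,
        candidates,
        analogy_pool=kg_by_relation.get(relation, []),
        supplement_pool=kg_by_entity.get(head, []),
    )
    reply = llm(prompt)

    # 3) Parse the reply; keep the retriever's order for anything the LLM omitted.
    mentioned = [c for c in candidates if c in reply]
    mentioned.sort(key=reply.find)             # order as the LLM listed them
    rest = [c for c in candidates if c not in mentioned]
    return mentioned + rest + ranked[top_m:]
```

The design point the sketch tries to preserve is that the LLM never scores the full entity set: it only re-orders a short, retriever-provided candidate list using in-context demonstrations, which is what keeps the approach free of fine-tuning and light on training overhead.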
**Experiments:**
- Evaluates KICGPT on benchmark datasets (FB15k-237 and WN18RR) against various baselines (a generic sketch of standard link-prediction ranking metrics appears after the Conclusion).
- Conducts ablation studies to demonstrate the effectiveness of each component in KICGPT.
- Analyzes the performance on long-tail entities.

**Conclusion:**
- Summarizes the key contributions and advantages of KICGPT.
- Discusses limitations and future work.
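The summary does not spell out the evaluation metrics, but link prediction on FB15k-237 and WN18RR is conventionally reported with mean reciprocal rank (MRR) and Hits@k. The snippet below is a generic sketch of those standard metrics, not a description of this paper's exact protocol; the function names and the toy ranks are illustrative.

```python
# Generic sketch of standard KGC link-prediction metrics (MRR, Hits@k).
# `ranks` holds the 1-based rank assigned to each gold entity by a model.

from typing import List


def mrr(ranks: List[int]) -> float:
    """Mean reciprocal rank of the gold entities."""
    return sum(1.0 / r for r in ranks) / len(ranks)


def hits_at_k(ranks: List[int], k: int) -> float:
    """Fraction of queries whose gold entity appears in the top k."""
    return sum(1 for r in ranks if r <= k) / len(ranks)


if __name__ == "__main__":
    # Toy example: four test queries with gold-entity ranks 1, 2, 5, and 12.
    ranks = [1, 2, 5, 12]
    print(f"MRR     = {mrr(ranks):.3f}")          # (1 + 0.5 + 0.2 + 1/12) / 4 ≈ 0.446
    print(f"Hits@1  = {hits_at_k(ranks, 1):.3f}")   # 0.250
    print(f"Hits@10 = {hits_at_k(ranks, 10):.3f}")  # 0.750
```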