Contextualization Distillation from Large Language Model for Knowledge Graph Completion


24 Feb 2024 | Dawei Li, Zhen Tan, Tianlong Chen, Huan Liu
The paper introduces a novel approach called *Contextualization Distillation* to enhance the performance of knowledge graph completion (KGC) models. The method leverages large language models (LLMs) to transform structural triplets into context-rich segments, addressing the limitations of the static and noisy corpora used by existing KGC models. Two auxiliary tasks—reconstruction and contextualization—are introduced to train smaller KGC models on these enriched triplets. Extensive experiments across datasets and model architectures demonstrate the effectiveness and adaptability of Contextualization Distillation, showing consistent performance improvements. The paper also analyzes the method to offer insights into the selection of generation paths and of suitable distillation tasks. The code and data are available at https://github.com/David-Li0406/Contextualization-Distillation.
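To make the pipeline described above more concrete, the sketch below illustrates the general idea in Python: a structural triplet is expanded into a prompt for an LLM, and the returned descriptive context is attached to the triplet so it can supervise the two auxiliary objectives on a smaller KGC model. The prompt wording, the `call_llm` stub, and the record layout are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of Contextualization Distillation (illustrative only).
# `call_llm` is a hypothetical stand-in for whatever LLM endpoint is used;
# the prompt template and data layout are assumptions, not the paper's exact ones.

from typing import Dict, List


def build_prompt(head: str, relation: str, tail: str) -> str:
    """Ask the LLM to expand a bare triplet into a context-rich passage."""
    return (
        f"Given the knowledge graph triplet ({head}, {relation}, {tail}), "
        "write a short descriptive paragraph explaining the relationship "
        "between the two entities."
    )


def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call (e.g. an API request); returns generated text."""
    raise NotImplementedError("Plug in your own LLM client here.")


def distill_contexts(triplets: List[Dict[str, str]]) -> List[Dict[str, str]]:
    """Attach an LLM-generated context to each triplet.

    The enriched records can then drive the two auxiliary tasks on a smaller
    KGC model: reconstruction (recover the masked triplet from its context)
    and contextualization (generate the context from the triplet).
    """
    enriched = []
    for t in triplets:
        context = call_llm(build_prompt(t["head"], t["relation"], t["tail"]))
        enriched.append({**t, "context": context})
    return enriched
```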