Contextualization Distillation from Large Language Model for Knowledge Graph Completion

24 Feb 2024 | Dawei Li, Zhen Tan, Tianlong Chen, Huan Liu
This paper introduces Contextualization Distillation, a method that enhances smaller knowledge graph completion (KGC) models with contextual information generated by large language models (LLMs). Existing KGC corpora are often static and noisy; to address this, the method extracts dynamic, high-quality descriptive context from LLMs. Two auxiliary tasks, reconstruction and contextualization, then help smaller KGC models learn from the enriched triplet information. The approach is compatible with both discriminative and generative KGC frameworks and, in evaluations on multiple datasets, yields consistent performance improvements regardless of the underlying architecture or pipeline. The paper also provides an in-depth analysis of why the method works, showing that the distilled context is more informative and coherent than the original corpora, and offering insights into selecting generating paths and suitable distillation tasks. The code and data for this work are available at the accompanying GitHub repository.
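To make the described pipeline concrete, here is a minimal sketch of the two ingredients the summary mentions: turning a triplet into a prompt that elicits descriptive context from an LLM, and folding the reconstruction and contextualization objectives into a standard KGC training loss. The prompt wording, function names, and loss weights are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch, assuming a generic LLM prompt interface and a PyTorch
# training loop. All names and weights here are illustrative assumptions.
import torch


def context_prompt(head: str, relation: str, tail: str) -> str:
    """Build a prompt asking an LLM for a short descriptive passage that
    connects the head and tail entities through the given relation."""
    return (
        f"Write a brief, factual paragraph describing how '{head}' "
        f"relates to '{tail}' via the relation '{relation}'."
    )


def distillation_loss(kgc_loss: torch.Tensor,
                      reconstruction_loss: torch.Tensor,
                      contextualization_loss: torch.Tensor,
                      alpha: float = 0.1,
                      beta: float = 0.1) -> torch.Tensor:
    """Weighted sum of the main KGC objective and the two auxiliary tasks:
    reconstruction (recover the corrupted LLM-generated context) and
    contextualization (generate the context from the raw triplet).
    alpha and beta are hypothetical weighting hyperparameters."""
    return kgc_loss + alpha * reconstruction_loss + beta * contextualization_loss


# Toy usage with dummy scalar losses standing in for real model outputs.
print(context_prompt("Marie Curie", "award received", "Nobel Prize in Physics"))
total = distillation_loss(torch.tensor(1.2), torch.tensor(0.8), torch.tensor(0.5))
print(total.item())
```

In practice the generated context would be cached per triplet and reused across epochs, and the auxiliary heads would share the backbone of whichever discriminative or generative KGC model is being trained.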