7 May 2024 | Jiajun Liu, Wenjun Ke, Peng Wang, Ziyu Shang, Jinhua Gao, Guozheng Li, Ke Ji, Yanhe Liu
The paper "Towards Continual Knowledge Graph Embedding via Incremental Distillation" addresses the challenge of updating knowledge graph embeddings (KGE) as new knowledge emerges while preserving existing knowledge. Traditional KGE methods require significant training costs when new knowledge is added, leading to the development of continual KGE (CKGE) methods. However, existing CKGE methods often neglect the explicit graph structure in knowledge graphs, which is crucial for effective learning and knowledge preservation.
To tackle this issue, the authors propose a novel method called Incremental Distillation for Continual Knowledge Graph Embedding (IncDE). IncDE leverages hierarchical ordering to optimize the learning sequence of new triples, ensuring that important entities and relations are learned first. It also introduces an incremental distillation mechanism to preserve old knowledge by facilitating the seamless transfer of entity representations from previous layers to new layers. Additionally, a two-stage training strategy is employed to prevent the over-corruption of old knowledge by under-trained new knowledge.
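A minimal sketch of how these pieces could fit together is given below. It is not the authors' implementation: the BFS-based layering heuristic, the TransE-style scorer, the snapshot-based distillation loss, and all hyperparameters are illustrative assumptions, and the two-stage training strategy and negative sampling are omitted for brevity.

```python
# Illustrative sketch only: layering heuristic, scorer, losses, and
# hyperparameters are assumptions, not the paper's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F

def hierarchical_layers(new_triples, old_entities):
    """Order new (h, r, t) triples into layers: triples touching already-known
    entities are learned first, then triples reachable from those, and so on."""
    known, remaining, layers = set(old_entities), list(new_triples), []
    while remaining:
        layer = [tr for tr in remaining if tr[0] in known or tr[2] in known]
        if not layer:                      # disconnected remainder is learned last
            layers.append(remaining)
            break
        layers.append(layer)
        known.update(e for h, _, t in layer for e in (h, t))
        remaining = [tr for tr in remaining if tr not in layer]
    return layers

class KGEModel(nn.Module):
    def __init__(self, n_ent, n_rel, dim=64):
        super().__init__()
        self.ent = nn.Embedding(n_ent, dim)
        self.rel = nn.Embedding(n_rel, dim)

    def score(self, h, r, t):              # TransE-style plausibility score (assumed)
        return -(self.ent(h) + self.rel(r) - self.ent(t)).norm(p=1, dim=-1)

def train_one_layer(model, layer, ent_snapshot, distill_ids, lam=1.0, steps=100):
    """Fit one layer of new triples while distilling entity embeddings learned
    earlier toward a snapshot taken before this layer started (no negative
    sampling here, for brevity)."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    h, r, t = (torch.tensor(col) for col in zip(*layer))
    for _ in range(steps):
        opt.zero_grad()
        loss_fit = -model.score(h, r, t).mean()           # learn the new triples
        loss_kd = F.mse_loss(model.ent(distill_ids),      # preserve old representations
                             ent_snapshot[distill_ids])
        (loss_fit + lam * loss_kd).backward()
        opt.step()
```

The sketch only conveys the overall loop of learning new triples layer by layer while constraining drift on entities learned earlier; the paper's actual distillation targets, weighting, and two-stage schedule are more elaborate.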
Experimental results on multiple datasets demonstrate that IncDE outperforms state-of-the-art baselines, improving mean reciprocal rank (MRR) by 0.2%–6.5%. Ablation experiments further validate the effectiveness of each component of IncDE, highlighting the importance of hierarchical ordering, incremental distillation, and the two-stage training strategy. The paper concludes by discussing the novelty and significance of IncDE, emphasizing its ability to learn emerging knowledge efficiently while preserving old knowledge effectively.