21 Feb 2024 | Mengqi Zhang, Xiaotian Ye, Qiang Liu, Pengjie Ren, Shu Wu, Zhumin Chen
This paper proposes GLAME, a novel method for enhancing large language model (LLM) editing by integrating knowledge graphs. The method addresses the challenge of updating LLMs with new knowledge while preserving the model's generalization ability. GLAME consists of two key components: a Knowledge Graph Augmentation (KGA) module and a Graph-based Knowledge Edit (GKE) module. The KGA module constructs a subgraph that captures the new associations resulting from an edit, while the GKE module integrates this subgraph into the LLM's parameter-editing process, allowing the model to incorporate changes in associated knowledge and improving its ability to utilize the edited knowledge.
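To make the subgraph-construction idea concrete, the following is a minimal Python sketch of gathering the facts newly associated with an edit. The `kg_neighbors` interface, the two-hop limit, and the triple format are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch of the knowledge-graph-augmentation idea: starting from an
# edited fact (subject, relation, new_object), collect the triples from an
# external KG that become newly associated with the subject after the edit.
from collections import deque

def build_edit_subgraph(subject, relation, new_object, kg_neighbors, max_hops=2):
    """kg_neighbors(entity) -> iterable of (relation, tail) pairs from an external KG.

    Returns a list of (head, relation, tail) triples forming the edit subgraph.
    """
    triples = [(subject, relation, new_object)]   # the edited fact itself
    seen = {new_object}
    frontier = deque([(new_object, 0)])
    while frontier:
        entity, depth = frontier.popleft()
        if depth >= max_hops:
            continue
        for rel, tail in kg_neighbors(entity):
            triples.append((entity, rel, tail))   # association induced by the edit
            if tail not in seen:
                seen.add(tail)
                frontier.append((tail, depth + 1))
    return triples
```

The key design point is that the subgraph is rooted at the *new* object of the edit, so the facts reachable from it are exactly the associations the post-edit model should now be able to use.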
The KGA module queries an external knowledge graph to construct a subgraph that captures the new associations caused by the edit, then extracts hidden vectors of entities and relations from the LLM to initialize the subgraph representations. The GKE module uses a relational graph neural network (RGNN) to propagate and aggregate information within this subgraph, incorporating the new knowledge associations into the parameter-editing process. This enables the model to update its parameters so that they reflect changes in both the edited knowledge and the associated knowledge.
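As a rough illustration of relation-aware message passing over such a subgraph, here is a minimal R-GCN-style layer in PyTorch. The dimensions, mean aggregation, and single-layer readout are simplifying assumptions; the paper's GKE module may differ in its exact formulation.

```python
# Hedged sketch: one layer of relation-specific message passing over the
# edit subgraph, with node features initialized from LLM hidden vectors.
import torch
import torch.nn as nn

class RelationalGraphLayer(nn.Module):
    def __init__(self, num_relations, dim):
        super().__init__()
        # one projection matrix per relation type, plus a self-loop projection
        self.rel_weights = nn.Parameter(torch.randn(num_relations, dim, dim) * 0.02)
        self.self_weight = nn.Linear(dim, dim, bias=False)

    def forward(self, node_feats, edges):
        """node_feats: (num_nodes, dim) hidden vectors extracted from the LLM.
        edges: list of (head_idx, rel_idx, tail_idx) triples of the subgraph."""
        messages = torch.zeros_like(node_feats)
        counts = torch.zeros(node_feats.size(0), 1, device=node_feats.device)
        for h, r, t in edges:
            # propagate the tail representation to the head through a
            # relation-specific transformation
            messages[h] += node_feats[t] @ self.rel_weights[r]
            counts[h] += 1
        aggregated = messages / counts.clamp(min=1)   # mean over incoming messages
        return torch.relu(self.self_weight(node_feats) + aggregated)
```

Using a separate transformation per relation type lets each kind of edge in the subgraph contribute differently to the updated entity representation, which is what allows the associated knowledge to be folded into the edit signal.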
Experiments on GPT-J and GPT-2 XL demonstrate that GLAME significantly improves the generalization capabilities of post-edit LLMs in employing edited knowledge. GLAME outperforms existing editing methods on metrics such as Efficacy Score, Paraphrase Score, and Neighborhood Score. The method is also effective on multi-hop reasoning tasks, where the model must reason across multiple steps to answer questions based on edited knowledge. These results indicate that GLAME effectively incorporates external knowledge graphs into the editing process, improving both performance and generalization.