5 Jun 2024 | Chenhui Hu, Pengfei Cao, Yubo Chen*, Kang Liu, Jun Zhao*
The paper "WilKE: Wise-Layer Knowledge Editor for Lifelong Knowledge Editing" addresses the challenges of knowledge editing in large language models (LLMs) for lifelong learning. Current knowledge editing methods, such as ROME and MEMIT, suffer from performance degradation in lifelong editing due to toxicity buildup and toxicity flash, primarily caused by pattern unmatch. To mitigate these issues, the authors propose the Wise-Layer Knowledge Editor (WilKE), which selects the editing layer based on the pattern matching degree of the knowledge across different layers. Experimental results on GPT2-XL and GPT-J show that WilKE achieves an average improvement of 46.2% and 67.8%, respectively, compared to state-of-the-art methods. The paper also discusses the limitations and ethical considerations of the proposed method, emphasizing the need for further research on knowledge storage and the potential misuse of edited models.The paper "WilKE: Wise-Layer Knowledge Editor for Lifelong Knowledge Editing" addresses the challenges of knowledge editing in large language models (LLMs) for lifelong learning. Current knowledge editing methods, such as ROME and MEMIT, suffer from performance degradation in lifelong editing due to toxicity buildup and toxicity flash, primarily caused by pattern unmatch. To mitigate these issues, the authors propose the Wise-Layer Knowledge Editor (WilKE), which selects the editing layer based on the pattern matching degree of the knowledge across different layers. Experimental results on GPT2-XL and GPT-J show that WilKE achieves an average improvement of 46.2% and 67.8%, respectively, compared to state-of-the-art methods. The paper also discusses the limitations and ethical considerations of the proposed method, emphasizing the need for further research on knowledge storage and the potential misuse of edited models.