17 Feb 2024 | Jiateng Liu, Pengfei Yu, Yuji Zhang, Sha Li, Zixuan Zhang, Heng Ji
The paper "EvEdit: Event-based Knowledge Editing with Deductive Editing Boundaries" addresses the problem of uncertain editing boundaries in knowledge editing (KE) for large language models (LLMs). The authors observe that current KE approaches, which typically operate on (subject, relation, object) triples, ignore contextual information and the relationships between pieces of knowledge, leading to ambiguous editing boundaries and uncertainty in the edited models. To address this, they introduce the concept of a *deduction anchor*: a set of pre-edit knowledge that remains unchanged during the editing process and supports logical deduction. They then propose *event-based knowledge editing*, which pairs facts with event descriptions to provide a more realistic and logically sound setting. This approach not only better simulates real-world editing scenarios but also implicitly defines the deduction anchor, resolving the issue of indeterminate editing boundaries.
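The triple-versus-event distinction can be made concrete with a small data sketch. All names below are illustrative stand-ins, not from the paper's code: a deduction anchor is simply the set of pre-edit facts whose slots the event does not touch.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Fact:
    """A (subject, relation, object) knowledge triple."""
    subject: str
    relation: str
    obj: str


@dataclass
class EventEdit:
    """An edit expressed as a natural-language event plus the facts it changes."""
    event: str
    updated_facts: list


def deduction_anchor(pre_edit_facts, edit):
    """Return the pre-edit facts whose (subject, relation) slots the event
    leaves untouched; these remain valid and can support logical deduction."""
    changed = {(f.subject, f.relation) for f in edit.updated_facts}
    return [f for f in pre_edit_facts if (f.subject, f.relation) not in changed]


# Hypothetical example: the event changes Alice's employer, so only the
# untouched 'lives_in' fact remains in the anchor.
facts = [
    Fact("Alice", "works_for", "Acme"),
    Fact("Alice", "lives_in", "Paris"),
]
edit = EventEdit(
    event="Alice resigned from Acme and joined Globex.",
    updated_facts=[Fact("Alice", "works_for", "Globex")],
)
anchor = deduction_anchor(facts, edit)
```

The point of the sketch is the contrast: a triple-based edit specifies only the changed fact, whereas the event description implicitly partitions pre-edit knowledge into what it overrides and what it anchors.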
The authors also introduce a novel approach called *Self-Edit*, which uses the pre-edit language model to generate relevant question-answer pairs and fine-tunes the model on these instances, improving the consistency and naturalness of the edited models. Empirical results demonstrate that event-based editing significantly reduces uncertainty compared to existing methods, and Self-Edit outperforms other approaches, achieving a 55.6% improvement in factual consistency while maintaining the naturalness of generation.
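The Self-Edit generate-then-fine-tune loop described above can be sketched as follows. `pre_edit_model` and `fine_tune` are hypothetical stand-in callables, since the summary gives the procedure but not the paper's actual interfaces:

```python
def self_edit(pre_edit_model, event, probe_questions, fine_tune):
    """Sketch of the Self-Edit loop: ask the *pre-edit* model to answer
    probe questions in light of the event, collect the resulting
    question-answer pairs, and fine-tune the model on them."""
    qa_pairs = [
        (q, pre_edit_model(f"{event}\nQ: {q}\nA:")) for q in probe_questions
    ]
    return fine_tune(qa_pairs)


# Toy stand-ins so the sketch runs end to end (purely illustrative).
def toy_model(prompt):
    # Pretend the model reads the event context and answers accordingly.
    return "Globex" if "employer" in prompt else "unknown"


def toy_fine_tune(qa_pairs):
    # A real implementation would run gradient updates on the model;
    # here we simply return the training set that would be used.
    return qa_pairs


trained = self_edit(
    toy_model,
    "Alice resigned from Acme and joined Globex.",
    ["Who is Alice's employer?"],
    toy_fine_tune,
)
```

The key design point the sketch preserves is that the *pre-edit* model itself supplies the supervision, which is what lets the edited model stay consistent and natural rather than learning from externally templated answers.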
The paper concludes by advocating for further research in this more pragmatic, event-based knowledge editing setting, highlighting its potential to enhance the trustworthiness and reliability of language models.