29 Feb 2024 | Xiaobao Wu, Liangming Pan, William Yang Wang, Anh Tuan Luu
This paper introduces Unstructured Knowledge Editing (UKE), a new benchmark that evaluates knowledge editing with unstructured facts rather than structured triplets. Existing benchmarks rely on structured facts, which are curated and precise, but real-world knowledge updates usually arrive as unstructured text such as news articles and Wikipedia passages. By using unstructured texts directly as knowledge updates, UKE is more practical and better aligned with real-world scenarios.

To evaluate knowledge editing methods in this setting, the paper constructs new datasets based on COUNTERFACT (counterfactual updates), MQUAKE-CF (multi-hop editing), and WIKIUPDATE (real-world updates). Experiments show that existing methods struggle with unstructured facts, suffering significant performance drops. In-context learning methods fare relatively well because they can reason over unstructured facts as background knowledge. Supplying triplets extracted from the unstructured facts helps some methods, but the extracted triplets are often inaccurate. The real-world updates in WIKIUPDATE are especially challenging and cause the largest performance declines. The paper argues for more practical knowledge editing methods that can handle unstructured facts effectively; future work may focus on improving in-context learning and on addressing the challenges of complex real-world updates.
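To make the contrast concrete, here is a minimal illustrative sketch (not from the paper; all function names and example facts are hypothetical) of the two edit formats: a curated (subject, relation, object) triplet versus a free-form passage, with the latter handled by the in-context editing recipe the paper finds comparatively robust, i.e. prepending the new fact as background knowledge before the question.

```python
def triplet_to_statement(subject, relation, obj):
    """Render a structured (subject, relation, object) edit as plain text."""
    return f"{subject} {relation} {obj}."


def build_icl_edit_prompt(new_fact, question):
    """Prepend the (possibly unstructured) updated fact as context,
    then ask the question; the model is expected to answer from the
    provided fact rather than its stale parametric knowledge."""
    return (
        "New information: " + new_fact + "\n"
        "Answer the question using the new information.\n"
        f"Question: {question}\n"
        "Answer:"
    )


# Structured edit: a curated triplet (the setting of prior benchmarks).
structured = triplet_to_statement("The Eiffel Tower", "is located in", "Rome")

# Unstructured edit: a news-style passage conveying the same update
# (the UKE setting); note it never states the triplet explicitly.
unstructured = ("Officials confirmed this week that the Eiffel Tower "
                "has been dismantled and reassembled in Rome.")

print(build_icl_edit_prompt(unstructured, "Where is the Eiffel Tower?"))
```

The gap between the two `new_fact` strings above is exactly what UKE probes: methods that expect a clean triplet must first extract one from the passage (an error-prone step), while in-context methods can consume the passage as-is.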