29 Feb 2024 | Xiaobao Wu, Liangming Pan, William Yang Wang, Anh Tuan Luu
The paper "Updating Language Models with Unstructured Facts: Towards Practical Knowledge Editing" addresses the challenge of updating language models with unstructured knowledge updates, which are more common in real-world texts like news articles and Wikipedia pages. Current evaluation strategies for knowledge editing methods focus on structured facts, which are well-curated triplets with subjects, relations, and objects. However, these methods struggle with unstructured facts, which are more implicit and complex. To address this issue, the authors propose a new benchmark called Unstructured Knowledge Editing (UKE), which evaluates the performance of knowledge editing methods using unstructured texts as updates.
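To make the contrast concrete, the snippet below sketches the two update formats the paper distinguishes: a well-curated (subject, relation, object) triplet versus the same edit stated implicitly in free text. The specific fact and the helper function are illustrative assumptions, not taken from the paper's datasets.

```python
# Illustrative contrast between the two knowledge-update formats.
# A structured fact is a curated (subject, relation, object) triplet;
# an unstructured fact conveys the same edit implicitly in free text.

structured_fact = ("Eiffel Tower", "located in", "Rome")  # counterfactual edit as a triplet

unstructured_fact = (
    "In a relocation project completed last year, the Eiffel Tower was "
    "dismantled and rebuilt in the heart of Rome, where it now overlooks "
    "the Tiber."
)  # the same counterfactual edit, stated implicitly

def to_statement(triplet):
    """Render a (subject, relation, object) triplet as a declarative edit."""
    subject, relation, obj = triplet
    return f"{subject} is {relation} {obj}."

print(to_statement(structured_fact))  # → Eiffel Tower is located in Rome.
```

A structured editing method consumes the triplet directly, whereas under UKE a method must first recover the implied triplet(s) from the free-text passage before any edit can be applied, which is where the paper observes the performance drop.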
The paper constructs new datasets for UKE, including both counterfactual updates and real-world updates drawn from Wikipedia. Extensive experiments show that current state-of-the-art knowledge editing methods perform poorly on UKE, with significant performance declines compared to editing with structured facts. Even when triplets are first extracted from the unstructured facts, these methods still struggle. The results highlight the need for more practical and responsive knowledge editing methods.
The authors also analyze the challenges posed by unstructured facts, such as their implicit nature and complexity, and discuss the limitations of their work, including the reliance on Wikipedia articles and the need for further diversity in data sources. The paper concludes by emphasizing the importance of future research in improving in-context learning methods for practical knowledge editing.