Decoding by Contrasting Knowledge: Enhancing LLMs' Confidence on Edited Facts

21 May 2024 | Baolong Bi, Shenghua Liu, Lingrui Mei, Yiwei Wang, Pengliang Ji, Xueqi Cheng
The paper "Decoding by Contrasting Knowledge: Enhancing LLMs' Confidence on Edited Facts" addresses the problem of outdated knowledge in large language models (LLMs) and proposes a novel decoding strategy, Decoding by Contrasting Knowledge (DeCK), to improve the performance of in-context editing (ICE). The authors observe that while ICE substantially boosts an LLM's confidence in new knowledge, it struggles with stubborn knowledge: facts that acquired excessive confidence during pretraining and are therefore difficult to edit effectively. DeCK addresses this by contrasting the logits produced with the new knowledge in context against the logits from the model's parametric knowledge alone, amplifying the distributional shift introduced by in-context editing.

Experimental results on the MQuAKE dataset show that DeCK improves the accuracy of ICE, particularly on stubborn knowledge, with gains of up to 219% on LLaMA3-8B-INSTRUCT. The paper also provides a detailed analysis of DeCK's effect on token-level distributions and a comprehensive evaluation of its effectiveness across scenarios. Overall, DeCK offers a promising way to strengthen the foundational knowledge-editing capabilities of LLMs.
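The core contrasting idea can be illustrated with a small sketch. This is not the paper's exact formulation; the function name, the `alpha` strength parameter, and the toy logits below are all illustrative assumptions, chosen only to show how contrasting edited-context logits with parametric logits can flip the decoded token toward the edited fact.

```python
import numpy as np

def contrast_edited_logits(new_logits, base_logits, alpha=1.0):
    """Illustrative contrastive decoding step (hypothetical helper).

    new_logits:  logits when the edited fact is provided in context
    base_logits: logits from the unedited, parametric-only model
    alpha:       contrast strength (assumed hyperparameter)
    """
    new_logits = np.asarray(new_logits, dtype=float)
    base_logits = np.asarray(base_logits, dtype=float)
    # Amplify tokens whose score rose under the edit and suppress
    # tokens the parametric model still prefers.
    return new_logits + alpha * (new_logits - base_logits)

# Toy 4-token vocabulary; token 2 stands in for the edited answer.
base = np.array([2.0, 0.5, 0.1, -1.0])  # parametric model favors token 0
new = np.array([1.5, 0.5, 1.2, -1.0])   # ICE raises token 2, lowers token 0
contrasted = contrast_edited_logits(new, base, alpha=1.0)
print(int(contrasted.argmax()))  # → 2: the edited token now wins
```

Note that with plain ICE logits alone (`new`), the stubborn token 0 still has the highest score; the contrast term is what tips the decision toward the edited fact.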