Deductive Closure Training of Language Models for Coherence, Accuracy, and Updatability

26 Jun 2024 | Afra Feyza Akyürek, Ekin Akyürek, Leshem Choshen, Derry Wijaya, Jacob Andreas
The paper introduces Deductive Closure Training (DCT), a method for improving the coherence, accuracy, and updatability of language models (LMs). DCT uses the LM itself to identify implications and contradictions of generated text, then fine-tunes the model on the subset of statements judged most likely to be factually correct. The method applies in both supervised and unsupervised settings, depending on the source of seed documents: in the supervised setting, DCT updates models from trusted source documents, while in the unsupervised setting, the LM generates its own seed documents. The effectiveness of DCT is demonstrated on three datasets: CREAK, MQuAKE, and "Reversal Curse." Supervised DCT improves fact verification and text generation accuracy by 3-26%, and unsupervised DCT improves verification accuracy by 12%. These results show that exploiting LMs' reasoning capabilities at training time can significantly enhance their reliability and adaptability.
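The selection step described above can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: `generate_related` and `truth_prob` are hypothetical stand-ins for LM calls (prompted generation of implications/contradictions, and the model's probability that a statement is true), and the hard-coded statements exist only to make the example runnable.

```python
import math
from itertools import combinations

def generate_related(seed):
    """Stand-in for LM generation: statements implied by (+1)
    or contradicting (-1) the seed, hard-coded for illustration."""
    related = {
        "A is B's mother": [("B is A's child", +1),
                            ("A has no children", -1)],
    }
    return related.get(seed, [])

def truth_prob(statement):
    """Stand-in for the LM's probability that a statement is true."""
    p = {"A is B's mother": 0.9, "B is A's child": 0.8,
         "A has no children": 0.1}
    return p.get(statement, 0.5)

def most_probable_consistent_subset(seed):
    """Score every logically consistent truth assignment over the
    generated statements; return the most probable subset, which
    would then serve as fine-tuning data."""
    related = generate_related(seed)
    statements = [seed] + [s for s, _ in related]
    contradicts_seed = {s for s, sign in related if sign < 0}
    best, best_score = set(), -math.inf
    for r in range(len(statements) + 1):
        for subset in map(set, combinations(statements, r)):
            # consistency check: the seed and a statement that
            # contradicts it cannot both be marked true
            if seed in subset and contradicts_seed & subset:
                continue
            score = sum(math.log(truth_prob(s)) if s in subset
                        else math.log(1 - truth_prob(s))
                        for s in statements)
            if score > best_score:
                best, best_score = subset, score
    return best

print(sorted(most_probable_consistent_subset("A is B's mother")))
# the seed and its implication survive; the contradiction is dropped
```

In this toy run the selected subset contains the seed and its implication but not the low-probability contradiction, mirroring how DCT propagates trusted facts while filtering inconsistent generations.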