Code Pretraining Improves Entity Tracking Abilities of Language Models

31 May 2024 | Najoung Kim*, Sebastian Schuster*, Shubham Toshniwal*
The paper investigates the impact of additional code pretraining, math training, and alignment tuning on the entity tracking abilities of language models. The authors systematically compare pairs of models, one trained on a base dataset and the other on additional code data, to test the hypothesis that code pretraining improves entity tracking; they also examine the effects of math training and alignment tuning. The results show that models trained on additional code data consistently outperform the base models on entity tracking tasks, particularly in non-trivial cases. In contrast, additional math training does not yield consistent improvements, and alignment tuning has mixed effects, with base models benefiting more from it than code models. The study concludes that code pretraining substantially enhances entity tracking abilities, while math training and alignment tuning provide limited benefits.
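As a rough illustration of what an entity tracking probe in this line of work can look like, the sketch below builds prompts in which boxes hold objects, objects are moved between boxes, and a model must report a box's final contents. The box/object scenario, prompt wording, and the `query_model` placeholder are assumptions made for this sketch, not details taken from the summary above.

```python
# Minimal sketch of an entity-tracking evaluation: boxes hold objects, objects
# are moved, and the model is asked for the final contents of a queried box.
# The scenario and the query_model placeholder are illustrative assumptions.

from typing import Callable, List, Tuple


def build_example(num_ops: int) -> Tuple[str, str]:
    """Build one prompt/answer pair with `num_ops` state-changing operations."""
    setup = "Box 1 contains the apple. Box 2 contains the key. Box 3 is empty."
    ops = [
        "Move the apple from Box 1 to Box 3.",
        "Move the key from Box 2 to Box 1.",
    ][:num_ops]
    question = "What does Box 3 contain?"
    answer = "the apple" if num_ops >= 1 else "nothing"
    prompt = " ".join([setup, *ops, question])
    return prompt, answer


def accuracy(query_model: Callable[[str], str],
             examples: List[Tuple[str, str]]) -> float:
    """Fraction of examples whose completion mentions the gold answer."""
    correct = sum(answer in query_model(prompt).lower()
                  for prompt, answer in examples)
    return correct / len(examples)


if __name__ == "__main__":
    examples = [build_example(n) for n in (0, 1, 2)]
    # Stand-in for a real model; in practice each query_model would wrap a
    # generation call to one member of a base/code model pair.
    dummy_model = lambda prompt: "the apple"
    print(f"dummy accuracy: {accuracy(dummy_model, examples):.2f}")
```

In a pairwise comparison of the kind described above, one `query_model` would wrap the base model and the other its code-pretrained counterpart, with accuracy tracked as the number of state-changing operations grows to separate trivial from non-trivial cases.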