19 Aug 2024 | Zhaochen Su*, Jun Zhang, Tong Zhu, Xiaoye Qu, Juntao Li†, Min Zhang, Yu Cheng
The paper "TIMO: Towards Better Temporal Reasoning for Language Models" addresses the challenge of enhancing temporal reasoning capabilities in Large Language Models (LLMs). The authors systematically study 38 temporal reasoning tasks and identify 19 tasks that are directly related to mathematics. They propose a self-critic temporal optimization method to enhance the model's temporal reasoning abilities without compromising general task performance. The proposed framework, TIMO, is designed to excel in temporal reasoning at both 7B and 13B scales, achieving state-of-the-art (SOTA) performance on average accuracy scores. Extensive experiments validate the effectiveness of the framework across diverse temporal tasks, demonstrating its robustness and generalization capabilities. The code for TIMO is available at <https://github.com/zhaochen0110/Timo>.The paper "TIMO: Towards Better Temporal Reasoning for Language Models" addresses the challenge of enhancing temporal reasoning capabilities in Large Language Models (LLMs). The authors systematically study 38 temporal reasoning tasks and identify 19 tasks that are directly related to mathematics. They propose a self-critic temporal optimization method to enhance the model's temporal reasoning abilities without compromising general task performance. The proposed framework, TIMO, is designed to excel in temporal reasoning at both 7B and 13B scales, achieving state-of-the-art (SOTA) performance on average accuracy scores. Extensive experiments validate the effectiveness of the framework across diverse temporal tasks, demonstrating its robustness and generalization capabilities. The code for TIMO is available at <https://github.com/zhaochen0110/Timo>.