TIMO: Towards Better Temporal Reasoning for Language Models

2024 | Zhaochen Su, Jun Zhang, Tong Zhu, Xiaoye Qu, Juntao Li, Min Zhang, Yu Cheng
The paper "TIMO: Towards Better Temporal Reasoning for Language Models" addresses the challenge of enhancing temporal reasoning capabilities in Large Language Models (LLMs). The authors systematically study 38 temporal reasoning tasks and identify 19 tasks that are directly related to mathematics. They propose a self-critic temporal optimization method to enhance the model's temporal reasoning abilities without compromising general task performance. The proposed framework, TIMO, is designed to excel in temporal reasoning at both 7B and 13B scales, achieving state-of-the-art (SOTA) performance on average accuracy scores. Extensive experiments validate the effectiveness of the framework across diverse temporal tasks, demonstrating its robustness and generalization capabilities. The code for TIMO is available at <https://github.com/zhaochen0110/Timo>.The paper "TIMO: Towards Better Temporal Reasoning for Language Models" addresses the challenge of enhancing temporal reasoning capabilities in Large Language Models (LLMs). The authors systematically study 38 temporal reasoning tasks and identify 19 tasks that are directly related to mathematics. They propose a self-critic temporal optimization method to enhance the model's temporal reasoning abilities without compromising general task performance. The proposed framework, TIMO, is designed to excel in temporal reasoning at both 7B and 13B scales, achieving state-of-the-art (SOTA) performance on average accuracy scores. Extensive experiments validate the effectiveness of the framework across diverse temporal tasks, demonstrating its robustness and generalization capabilities. The code for TIMO is available at <https://github.com/zhaochen0110/Timo>.