InternLM-Math: Open Math Large Language Models Toward Verifiable Reasoning
24 May 2024 | Huaiyuan Ying, Shuo Zhang, Linyang Li, Zhejian Zhou, Yunfan Shao, Zhaoye Fei, Yichuan Ma, Jiawei Hong, Kuikun Liu, Ziyi Wang, Yudong Wang, Zijian Wu, Shuaibin Li, Fengzhe Zhou, Hongwei Liu, Songyang Zhang, Wenwei Zhang, Hang Yan, Xipeng Qiu, Jiayu Wang, Kai Chen, Dahua Lin
The paper introduces InternLM-Math, an open-source large language model (LLM) designed for mathematical reasoning. InternLM-Math is based on the InternLM2-Base model and is pre-trained on a diverse collection of high-quality data, including math corpora, domain-specific datasets, and synthetic data. The model is trained to perform a range of mathematical tasks, such as solving problems, verifying solutions, proving statements, and using code interpreters. It achieves state-of-the-art performance on multiple benchmarks, including GSM8K, MATH, the Hungarian math exam, MathBench-ZH, and MiniF2F. The paper also explores the use of the formal math language LEAN for both solving and proving math problems, demonstrating LEAN's potential as a unified platform for math reasoning. The authors provide detailed descriptions of the pre-training data composition, the training strategy, and the evaluation results, highlighting the model's strengths and limitations. The paper concludes by discussing future directions, including improving chain-of-thought reasoning, adding self-critique capabilities, and enhancing process reward modeling.
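To make the LEAN angle concrete, below is a minimal sketch of the kind of formal statement-proof pair a LEAN-capable model is asked to produce; the theorem and proof here are illustrative and not drawn from the paper. Because Lean's kernel mechanically checks the proof, a generated output is either accepted or rejected, which is what makes this style of reasoning "verifiable".

```lean
-- Illustrative Lean 4 example (not from the paper): a formal statement
-- together with a proof term that Lean's kernel can verify automatically.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```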