Large Language Models for Mathematical Reasoning: Progresses and Challenges

5 Apr 2024 | Janice Ahn, Rishu Verma, Renze Lou, Di Liu, Rui Zhang, and Wenpeng Yin
This paper provides a comprehensive survey of the progress and challenges in using Large Language Models (LLMs) for mathematical reasoning. It covers the main mathematical problem types: arithmetic, math word problems, geometry, automated theorem proving, and math in vision-language contexts. It then reviews the methodologies used to enhance LLMs for mathematical reasoning, including prompting techniques, performance-boosting strategies, and fine-tuning, and examines the factors that influence how well LLMs solve math problems: tokenization, pre-training, prompting, model scale, and instruction tuning. The paper also addresses open challenges, such as the brittleness of LLM reasoning, limited generalization, and the need for human-centric approaches in math education. The survey emphasizes the importance of understanding the limitations of LLMs and the need for further research to improve their mathematical reasoning. It concludes by highlighting the potential of LLMs in mathematical education and calling for a balanced approach that incorporates human factors to ensure effective and meaningful learning outcomes.
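
Of the prompting strategies the survey discusses, chain-of-thought (CoT) prompting is among the most widely used: a worked, step-by-step exemplar is prepended to the target question so the model produces intermediate reasoning before its final answer. The Python sketch below illustrates the idea under stated assumptions: the exemplar is the well-known tennis-balls problem from the CoT literature, and call_model is a hypothetical stand-in for whatever LLM API is actually in use.

# Minimal sketch of few-shot chain-of-thought (CoT) prompting for a
# math word problem. The exemplar and call_model are illustrative
# assumptions, not the survey's own implementation.

COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
    "6 tennis balls. 5 + 6 = 11. The answer is 11.\n\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend a worked, step-by-step exemplar so the model imitates
    explicit intermediate reasoning instead of answering directly."""
    return COT_EXEMPLAR + f"Q: {question}\nA:"

def call_model(prompt: str) -> str:
    """Hypothetical placeholder: substitute an actual LLM API call here."""
    raise NotImplementedError("plug in a real model endpoint")

if __name__ == "__main__":
    question = ("A cafeteria had 23 apples. They used 20 for lunch and "
                "bought 6 more. How many apples do they have?")
    print(build_cot_prompt(question))

In practice, several such exemplars are typically concatenated, and the final answer is parsed from the model's completion (e.g., the text following "The answer is").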