Case-Based or Rule-Based: How Do Transformers Do the Math?

2024 | Yi Hu, Xiaojuan Tang, Haotong Yang, Muhan Zhang
The paper explores the reasoning mechanisms of large language models (LLMs) in solving math problems, specifically focusing on whether they use rule-based or case-based reasoning. Through intervention experiments on five math tasks, the authors confirm that LLMs perform case-based reasoning, relying on similar cases seen in the training corpus rather than learning systematic rules. To address this limitation, they propose Rule-Following Fine-Tuning (RFFT), a technique that explicitly teaches LLMs to follow rules step-by-step. This method enables LLMs to generalize to longer addition problems with over 95% accuracy, significantly outperforming scratchpad methods. The study highlights the importance of rule-based reasoning for systematic generalization and demonstrates the effectiveness of RFFT in enhancing LLMs' reasoning capabilities.
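
To make the intervention idea concrete, here is a minimal sketch of a "leave-square-out" style data split: a neighborhood of training cases around a test point is held out, and the model is tested on it. The function name and dataset construction are illustrative assumptions, not the paper's exact experimental code.

```python
# Sketch of an intervention split for two-operand addition.
# Assumption: problems are pairs (a, b) with a, b < max_val; the paper's
# actual tasks, ranges, and split procedure may differ.
def leave_square_out_split(max_val, center, radius):
    """Exclude all (a, b) pairs within `radius` of `center` from training
    and reserve them for testing."""
    train, test = [], []
    ca, cb = center
    for a in range(max_val):
        for b in range(max_val):
            if abs(a - ca) <= radius and abs(b - cb) <= radius:
                test.append((a, b))   # held-out neighborhood, never seen in training
            else:
                train.append((a, b))  # surrounding cases available for training
    return train, test

train, test = leave_square_out_split(max_val=100, center=(50, 50), radius=5)
```

If a model fine-tuned on `train` fails on the held-out square while succeeding everywhere else, it is behaving case-based: it depends on nearby seen examples rather than applying the addition rule.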
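
The RFFT idea can likewise be sketched as data construction: instead of input-answer pairs (or a bare scratchpad), each training target spells out every application of the rule. The trace format and helper below are assumptions for illustration, not the paper's exact prompt template.

```python
# Sketch of building a Rule-Following Fine-Tuning (RFFT) example for
# long addition: the target narrates the digit-by-digit rule, including
# the carry, so the model is trained to execute the rule explicitly.
def rule_following_trace(a: int, b: int) -> str:
    xs, ys = str(a)[::-1], str(b)[::-1]  # least-significant digit first
    steps, digits, carry = [], [], 0
    for i in range(max(len(xs), len(ys))):
        dx = int(xs[i]) if i < len(xs) else 0
        dy = int(ys[i]) if i < len(ys) else 0
        total = dx + dy + carry
        digits.append(str(total % 10))
        steps.append(
            f"Step {i + 1}: {dx} + {dy} + carry {carry} = {total}; "
            f"write {total % 10}, carry {total // 10}"
        )
        carry = total // 10
    if carry:
        digits.append(str(carry))
        steps.append(f"Final: write remaining carry {carry}")
    answer = "".join(reversed(digits))
    return "\n".join(steps) + f"\nAnswer: {answer}"

# Each fine-tuning pair teaches rule execution rather than case recall.
example = {
    "input": "Add 957 and 486 by following the addition rule step by step.",
    "target": rule_following_trace(957, 486),
}
print(example["target"])  # ends with "Answer: 1443"
```

Because the trace works one digit at a time, it applies unchanged to operands of any length, which is what lets RFFT-trained models generalize to much longer additions than those seen in training.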