30 Mar 2024 | Ben Zhou, Hongming Zhang, Sihao Chen, Dian Yu, Hongwei Wang, Baolin Peng, Dan Roth, Dong Yu
The paper "Conceptual and Unbiased Reasoning in Language Models" by Ben Zhou, Hongming Zhang, Sihao Chen, Dian Yu, Hongwei Wang, Baolin Peng, Dan Roth, and Dong Yu explores the ability of large language models (LLMs) to perform conceptual reasoning, a capability crucial for generalization in human cognition. The authors propose a novel conceptualization framework that forces models to reason abstractly and generate solutions in a verifiable symbolic space. Using this framework, they demonstrate that existing LLMs fall short in conceptual reasoning, with performance drops ranging from 9% to 28% compared to direct inference methods. To improve these models, they introduce two techniques: generating familiar questions with similar underlying reasoning paths, and asking models to self-refine their solutions. Experiments show that these techniques improve conceptual reasoning performance by 8% to 11%, yielding a more robust and less biased reasoning system. The paper also discusses the importance of high-level abstract reasoning for unbiased and generalizable decision-making, highlighting the limitations of LLMs that rely heavily on induction-based inference.
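To make the two improvement techniques concrete, here is a minimal sketch of how they might be wired together as a prompting pipeline. This is an illustration under assumptions, not the paper's implementation: `complete()` is a hypothetical stand-in for any LLM API call, and the prompt wording is invented for demonstration.

```python
def complete(prompt: str) -> str:
    """Hypothetical LLM call; replace with your provider's actual API."""
    raise NotImplementedError


def generate_familiar_question(question: str) -> str:
    # Technique 1: ask the model for a more familiar question that shares
    # the same underlying abstract reasoning path as the original.
    return complete(
        "Write a simpler, more familiar question that is solved by the same "
        f"abstract reasoning steps as this one:\n{question}"
    )


def solve_with_self_refinement(question: str, rounds: int = 2) -> str:
    analogue = generate_familiar_question(question)

    # Elicit the shared abstract (symbolic) reasoning steps, rather than
    # jumping straight to a final answer.
    solution = complete(
        "Describe the abstract reasoning steps (not the final answer) that "
        f"solve both questions:\n1. {analogue}\n2. {question}"
    )

    # Technique 2: iteratively ask the model to critique and refine its own
    # abstract solution before committing to an answer.
    for _ in range(rounds):
        solution = complete(
            "Check the following reasoning steps for errors or hidden "
            f"assumptions, then output a corrected version:\n{solution}"
        )

    # Finally, apply the refined abstract steps to the original question.
    return complete(
        "Apply these reasoning steps to answer the question.\n"
        f"Steps:\n{solution}\nQuestion: {question}"
    )
```

The design intuition, per the paper's framing, is that routing the model through an abstract, verifiable intermediate representation (rather than direct instance-level inference) is what enables the self-refinement step to catch biased or overfitted reasoning.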