6 Jun 2024 | Wenqi Zhang, Yongliang Shen, Linjuan Wu, Qiuying Peng, Jun Wang, Yueting Zhuang, Weiming Lu
The paper "Self-Contrast: Better Reflection Through Inconsistent Solving Perspectives" explores the limitations of large language models (LLMs) in self-reflection without external feedback. It finds that LLMs often provide overconfident or inconsistent feedback, leading to poor reflection. To address this, the authors propose a method called Self-Contrast, which involves creating diverse solving perspectives, contrasting their differences, and summarizing these discrepancies into a checklist for re-examination. This approach helps LLMs identify and correct errors more accurately and stably. Experiments on various tasks, including mathematical reasoning and translation, demonstrate the effectiveness and generality of Self-Contrast, showing significant improvements over vanilla reflection methods. The paper also discusses the limitations of the approach, particularly for smaller-scale LLMs, and suggests future directions for improvement.The paper "Self-Contrast: Better Reflection Through Inconsistent Solving Perspectives" explores the limitations of large language models (LLMs) in self-reflection without external feedback. It finds that LLMs often provide overconfident or inconsistent feedback, leading to poor reflection. To address this, the authors propose a method called Self-Contrast, which involves creating diverse solving perspectives, contrasting their differences, and summarizing these discrepancies into a checklist for re-examination. This approach helps LLMs identify and correct errors more accurately and stably. Experiments on various tasks, including mathematical reasoning and translation, demonstrate the effectiveness and generality of Self-Contrast, showing significant improvements over vanilla reflection methods. The paper also discusses the limitations of the approach, particularly for smaller-scale LLMs, and suggests future directions for improvement.