(Ir)rationality and Cognitive Biases in Large Language Models

15 Feb 2024 | Olivia Macmillan-Scott, Mirco Musolesi
This paper evaluates the rationality and cognitive biases of seven large language models (LLMs) using tasks from cognitive psychology. The authors find that while LLMs exhibit irrationality, it differs from human-like biases. The models' responses are highly inconsistent: the same model gives both correct and incorrect answers, and responses vary across runs. Most incorrect responses lack human-like biases, indicating a different type of irrationality. The study also highlights the importance of assessing LLMs' rational reasoning capabilities, particularly in critical applications. Methodologically, the paper contributes a framework for evaluating and comparing the rational reasoning abilities of LLMs using tasks originally designed for human subjects. The results suggest that LLMs do not fail in the same ways as humans, and that their performance on mathematical tasks is generally lower than on non-mathematical ones. The paper concludes by discussing the implications for safety and directions for further research in this area.