Confabulation: The Surprising Value of Large Language Model Hallucinations

Forthcoming at ACL 2024 | Peiqi Sui, Eamon Duede, Sophie Wu, Richard Jean So
This paper argues that large language model (LLM) hallucinations, or 'confabulations', should be viewed as a potential resource rather than a categorically harmful flaw. Drawing on empirical evidence from three hallucination benchmarks, the authors show that confabulated outputs exhibit measurably higher narrativity and semantic coherence than factual ones. This suggests that confabulation may have value in enabling coherent communication and sense-making, mirroring the human tendency to use narratives as cognitive tools for understanding and connection. Building on this evidence, the authors propose a narrative-centric definition of confabulation that emphasizes its potential benefits for human communication and cognitive processing, and they situate confabulation within the broader context of AI research, highlighting possible applications across domains. The paper concludes that confabulation, far from being an inherent defect, is a natural byproduct of LLMs' capacity to generate coherent, narrative-rich text, and that treating it as a resource offers a more flexible framework for understanding and analyzing LLM outputs.
[slides and audio] Confabulation: The Surprising Value of Large Language Model Hallucinations