A Survey on Large Language Model Hallucination via a Creativity Perspective

2 Feb 2024 | Xuhui Jiang, Yuxing Tian, Fengrui Hua, Chengjin Xu, Yuanzhuo Wang, Jian Guo
This paper explores the potential of hallucinations in large language models (LLMs) as a source of creativity, challenging the traditional view that they are solely detrimental. It first reviews the taxonomy of hallucinations, their negative impact on LLM reliability, and their possible creative benefits. It then examines definitions and assessments of creativity, focusing on the divergent and convergent thinking phases, systematically reviews the literature on harnessing hallucinations for creativity in LLMs, and discusses future research directions, emphasizing the need to explore and refine the application of hallucinations in creative processes.

Hallucinations in LLMs are categorized into factuality and faithfulness types: factuality hallucinations involve factual inconsistencies or outright fabrication, while faithfulness hallucinations involve inconsistencies with the instruction or the surrounding context. Detection methods include external fact retrieval, uncertainty estimation (a minimal uncertainty-scoring sketch appears after this summary), and classification-based metrics. Reduction strategies involve model training, reinforcement learning, and knowledge graph augmentation.

The paper questions the uniformly negative perception of hallucinations, suggesting that they may hold creative potential. Historical analogies, such as the shift from the geocentric to the heliocentric model and accidental discoveries like penicillin, illustrate how departures from accepted knowledge can yield novel ideas. Cognitive science research indicates that creativity involves both divergent and convergent thinking, and hallucinations may contribute to both phases of the creative process.

Turning to creativity in LLMs, the paper defines it through cognitive science perspectives and discusses measurement approaches. Recent studies show that LLMs can exhibit creativity, with some work proposing that they can be as creative as humans under certain conditions. Methods for harnessing hallucinations for creativity are organized into divergent and convergent phases (a sample-then-rerank sketch of this split appears below), and the survey highlights the need for comprehensive evaluation metrics and benchmarks.

Future research directions include deeper theoretical exploration, richer datasets, optimized method designs, and broader application scenarios. The paper concludes that hallucinations in LLMs can be a valuable resource for creativity and that further research is needed to fully understand and exploit this potential.
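To make the survey's mention of uncertainty estimation concrete, below is a minimal sketch assuming a HuggingFace-style causal LM (gpt2 as a small stand-in) and an illustrative probability threshold. It flags tokens to which the model itself assigns low probability, a crude proxy for hallucination risk; it is not the survey's own implementation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# gpt2 keeps the example small; any causal LM with the same API works.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def flag_low_confidence_tokens(text: str, threshold: float = 0.1):
    """Return (token, probability) pairs where the model's own next-token
    probability falls below `threshold` -- a crude uncertainty signal.
    The 0.1 threshold is an illustrative assumption."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits  # shape: (1, seq_len, vocab)
    # Probability the model assigned to each token that actually follows.
    probs = torch.softmax(logits[0, :-1], dim=-1)
    next_ids = ids[0, 1:]
    token_probs = probs[torch.arange(len(next_ids)), next_ids]
    tokens = tokenizer.convert_ids_to_tokens(next_ids.tolist())
    return [(t, p.item()) for t, p in zip(tokens, token_probs) if p < threshold]

# Low-probability spans in a factual claim are candidates for verification.
print(flag_low_confidence_tokens("The capital of Australia is Sydney."))
```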
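Similarly, the divergent/convergent split the survey describes maps naturally onto a sample-then-rerank pipeline. The sketch below is an assumption-laden illustration, not a method from the paper: the divergent phase samples high-temperature continuations (where hallucinated but potentially novel content surfaces), and the convergent phase re-ranks them by the model's own average log-likelihood. The temperature, candidate count, and scoring choice are all illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def divergent_then_convergent(prompt: str, n: int = 8, keep: int = 2):
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    # Divergent phase: high temperature encourages unlikely continuations.
    candidates = model.generate(
        ids,
        do_sample=True,
        temperature=1.3,  # illustrative; higher values are more divergent
        max_new_tokens=40,
        num_return_sequences=n,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Convergent phase: score each candidate by mean token log-probability,
    # keeping the samples the model finds most internally coherent.
    scored = []
    for seq in candidates:
        with torch.no_grad():
            logits = model(seq.unsqueeze(0)).logits
        logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
        token_lp = logprobs[torch.arange(len(seq) - 1), seq[1:]]
        text = tokenizer.decode(seq, skip_special_tokens=True)
        scored.append((token_lp.mean().item(), text))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [text for _, text in scored[:keep]]

for candidate in divergent_then_convergent("An unexpected use for a failed experiment:"):
    print(candidate)
```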