Hallucination is Inevitable: An Innate Limitation of Large Language Models

22 Jan 2024 | Ziwei Xu, Sanjay Jain, Mohan Kankanahalli
The paper "Hallucination is Inevitable: An Innate Limitation of Large Language Models" by Ziwei Xu, Sanjay Jain, and Mohan Kankanahalli from the National University of Singapore addresses the significant drawback of hallucination in large language models (LLMs). The authors formally define hallucination as inconsistencies between a computable LLM and a computable ground truth function, and show that it is impossible to eliminate hallucination in LLMs. They use learning theory to demonstrate that LLMs cannot learn all computable functions and will therefore always hallucinate. Since their formal world is a part of the real world, hallucinations are also inevitable for real-world LLMs. The paper identifies hallucination-prone tasks and empirically validates the theoretical results. It discusses the practical implications of these findings on the design of hallucination mitigators and the safe deployment of LLMs. The contributions of the paper include formalizing the problem of hallucination, showing its inevitability, and discussing the limitations and potential mitigators of LLMs. The authors conclude that all LLMs will hallucinate and that without proper controls, LLMs cannot be used for critical decision-making. They also emphasize the value of LLMs in enhancing productivity and the potential benefits of hallucinations in certain creative contexts.The paper "Hallucination is Inevitable: An Innate Limitation of Large Language Models" by Ziwei Xu, Sanjay Jain, and Mohan Kankanahalli from the National University of Singapore addresses the significant drawback of hallucination in large language models (LLMs). The authors formally define hallucination as inconsistencies between a computable LLM and a computable ground truth function, and show that it is impossible to eliminate hallucination in LLMs. They use learning theory to demonstrate that LLMs cannot learn all computable functions and will therefore always hallucinate. Since their formal world is a part of the real world, hallucinations are also inevitable for real-world LLMs. The paper identifies hallucination-prone tasks and empirically validates the theoretical results. It discusses the practical implications of these findings on the design of hallucination mitigators and the safe deployment of LLMs. The contributions of the paper include formalizing the problem of hallucination, showing its inevitability, and discussing the limitations and potential mitigators of LLMs. The authors conclude that all LLMs will hallucinate and that without proper controls, LLMs cannot be used for critical decision-making. They also emphasize the value of LLMs in enhancing productivity and the potential benefits of hallucinations in certain creative contexts.