8 Jan 2024 | S.M Towhidul Islam Tonmoy, S M Mehedi Zaman, Vinija Jain, Anku Rani, Vipula Rawte, Aman Chadha, Amitava Das
This paper presents a comprehensive survey of over thirty-two techniques for mitigating hallucination in Large Language Models (LLMs). Hallucination, the generation of factually incorrect or ungrounded content, is a significant challenge for LLMs, particularly in sensitive applications such as medical records, customer support, and financial analysis. The paper introduces a detailed taxonomy that categorizes these methods by parameters such as dataset utilization, common tasks, feedback mechanisms, and retriever types, helping distinguish among the diverse approaches to the problem. Notable techniques include Retrieval-Augmented Generation (RAG), Knowledge Retrieval, Chain of Natural Language Inference (CoNLI), and Chain-of-Verification (CoVe). The paper also analyzes the challenges and limitations of these techniques, providing a foundation for future research on hallucination and related phenomena in LLMs. Key contributions include the systematic taxonomy, a synthesis of each method's essential features, and a discussion of limitations and future research directions. The paper emphasizes the importance of addressing hallucination given LLMs' widespread adoption across domains and their role in critical tasks.
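Since the abstract singles out RAG as a representative mitigation technique, here is a minimal sketch of the retrieve-then-generate pattern it refers to: ground the prompt in retrieved evidence so the model has less room to invent unsupported facts. The `retrieve` and `generate` functions and the toy knowledge base are hypothetical placeholders for illustration, not code from the survey.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# All names here (retrieve, generate, KNOWLEDGE_BASE) are illustrative
# placeholders, not the survey's reference implementation.

KNOWLEDGE_BASE = [
    "Chain-of-Verification drafts an answer, plans verification "
    "questions, answers them, and revises the draft.",
    "Retrieval-Augmented Generation grounds the model's output in "
    "documents fetched at inference time.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Toy retriever: rank passages by word overlap with the query."""
    def overlap(passage: str) -> int:
        return len(set(query.lower().split()) & set(passage.lower().split()))
    return sorted(KNOWLEDGE_BASE, key=overlap, reverse=True)[:k]

def generate(prompt: str) -> str:
    """Placeholder for an actual LLM call (e.g., an API request)."""
    return f"[model answer conditioned on]\n{prompt}"

def rag_answer(question: str) -> str:
    # Conditioning the prompt on retrieved evidence is what discourages
    # the model from producing ungrounded content.
    context = "\n".join(retrieve(question))
    prompt = (f"Answer using only the context below.\n"
              f"Context:\n{context}\n\nQuestion: {question}")
    return generate(prompt)

if __name__ == "__main__":
    print(rag_answer("How does retrieval reduce hallucination?"))
```

In a real deployment the keyword-overlap ranking would be replaced by a dense or sparse retriever over an external corpus, which is the design axis (retriever type) the survey's taxonomy uses to group these methods.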