This comprehensive survey explores the phenomenon of "hallucination" in Large Vision-Language Models (LVLMs), which refers to the misalignment between factual visual content and the corresponding textual generation. The survey begins by clarifying the concept of hallucination in LVLMs, presenting its various symptoms and highlighting the challenges unique to the multimodal setting. It then outlines benchmarks and methodologies for evaluating hallucinations, covering both non-hallucinatory generation and hallucination discrimination. The root causes of hallucinations are analyzed in detail, including data bias and annotation irrelevance in the training data, as well as limitations of vision encoders and issues with modality alignment within model components. Existing mitigation methods are critically reviewed, focusing on optimizing training data, refining model components, and post-processing techniques. Finally, open questions and future directions are discussed, emphasizing the need for more detailed supervision objectives, enriched modalities, and improved interpretability. The survey aims to provide insights for the development of LVLMs and to explore the opportunities and challenges related to hallucinations.