The paper "Detecting and Mitigating Hallucination in Large Vision Language Models via Fine-Grained AI Feedback" addresses the issue of hallucinations in Large Vision Language Models (LVMs), where generated texts do not align with given contexts. The authors propose a method to detect and mitigate hallucinations using fine-grained AI feedback from proprietary models like GPT-4 and GPT-4V. The key contributions include:
1. **Fine-Grained AI Feedback**: A small-scale, sentence-level hallucination annotation dataset is generated by the proprietary models and used to train a hallucination detection model that identifies hallucinations at the sentence level, covering object, attribute, and relationship hallucinations.
2. **Detect-Then-Rewrite Pipeline**: An automatic pipeline constructs preference datasets for training hallucination mitigation models: the detector flags hallucinated sentences in a response, and a rewriting model turns the response into a non-hallucinatory version, substantially reducing annotation costs (see the pipeline sketch after this list).
3. **Hallucination Severity-Aware Direct Preference Optimization (HSA-DPO)**: This method incorporates hallucination severity into preference learning, so that mitigating critical hallucinations is prioritized over minor ones (a sketch of the weighted objective follows the list).
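The following is a minimal sketch of how the detect-then-rewrite construction could be wired together. All names here (`detect_hallucinations`, `rewrite_response`, `PreferencePair`) and the record fields are hypothetical placeholders for illustration, not the paper's actual interfaces:

```python
# Hypothetical sketch of the detect-then-rewrite pipeline.
# detect_hallucinations / rewrite_response stand in for the trained
# detector and the rewriting model; their real interfaces may differ.

from dataclasses import dataclass

@dataclass
class PreferencePair:
    image_id: str
    prompt: str
    chosen: str      # rewritten, non-hallucinatory response
    rejected: str    # original response containing hallucinations
    severity: float  # aggregate severity of the detected hallucinations

def detect_hallucinations(image_id: str, response: str) -> list[dict]:
    """Placeholder for the sentence-level detector: returns one record per
    hallucinated sentence, e.g. {"sentence": ..., "type": "object" |
    "attribute" | "relationship", "severity": float}."""
    raise NotImplementedError

def rewrite_response(response: str, findings: list[dict]) -> str:
    """Placeholder for the rewriting model: edits only the flagged
    sentences so the rest of the response is preserved."""
    raise NotImplementedError

def build_preference_pair(image_id: str, prompt: str, response: str):
    findings = detect_hallucinations(image_id, response)
    if not findings:  # nothing hallucinated, so no preference signal here
        return None
    chosen = rewrite_response(response, findings)
    severity = sum(f["severity"] for f in findings) / len(findings)
    return PreferencePair(image_id, prompt, chosen, response, severity)
```

Because only flagged sentences are rewritten, each preference pair differs from its rejected counterpart exactly where the hallucinations were, which keeps the learning signal focused.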
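For reference, standard DPO maximizes the log-odds that the policy prefers the rewritten (chosen) response $y_w$ over the hallucinated (rejected) response $y_l$. One natural way to make this severity-aware, shown below purely as an illustrative assumption rather than the paper's exact formulation, is to weight each pair's loss by a severity score $s \in (0, 1]$ produced by the detector:

$$
\mathcal{L}_{\text{HSA-DPO}} = -\,\mathbb{E}_{(x,\, y_w,\, y_l)}\left[\, s(y_l)\, \log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]
$$

Here $\pi_\theta$ is the model being trained, $\pi_{\mathrm{ref}}$ is a frozen reference model, and $\beta$ is the usual DPO temperature; setting $s \equiv 1$ recovers vanilla DPO, while larger $s$ pushes the model harder away from severely hallucinated responses.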
Experiments on various benchmarks demonstrate the effectiveness of the proposed method, showing state-of-the-art results in hallucination detection and significant improvements in hallucination mitigation. The method reduces hallucination rates by up to 76.3% on Object HalBench and 36.1% on AMBER, outperforming competitive models. The paper also highlights the importance of fine-grained feedback and the practicality of the proposed pipeline for large-scale preference dataset construction.