Retrieval Head Mechanistically Explains Long-Context Factuality

24 Apr 2024 | Wenhao Wu, Yizhong Wang, Guangxuan Xiao, Hao Peng, Yao Fu
This paper investigates the internal mechanisms of long-context language models, focusing on how they retrieve relevant information from arbitrary locations within the input. The authors identify a special type of attention head, termed *retrieval heads*, responsible for this information retrieval. These heads are found to be universal, sparse, intrinsic, dynamically activated, and causal. Key findings include:

1. **Universality and Sparsity**: all models capable of long-context processing exhibit a small set of retrieval heads.
2. **Intrinsicness**: retrieval heads are already present in base models and are not significantly altered by subsequent fine-tuning or derived variants.
3. **Dynamic Activation**: retrieval heads activate depending on the context, with the strongest heads consistently attending to the required information and weaker heads attending to different parts of the input.
4. **Causality**: masking retrieval heads leads to hallucination, while masking random non-retrieval heads has minimal impact.

The authors also demonstrate that retrieval heads significantly influence downstream tasks such as extractive question answering and chain-of-thought reasoning. They propose that understanding these heads could lead to improvements in reducing hallucination, enhancing reasoning, and compressing the KV cache, which is crucial for deploying long-context models efficiently.
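To make the detection idea concrete: a head can be scored by how often its attention lands on the "needle" token being copied during a needle-in-a-haystack test, with high-scoring heads flagged as retrieval heads. The sketch below is a minimal, hypothetical illustration of that scoring scheme on synthetic attention weights (the function name, signature, and inputs are assumptions, not the paper's code):

```python
import numpy as np

def retrieval_score(attn, needle_positions, copied_from_needle):
    """Score one attention head's retrieval behavior.

    attn: array of shape (steps, context_len), the head's attention
          distribution at each decoding step.
    needle_positions: set of context indices holding the needle tokens.
    copied_from_needle: per-step booleans, True where the generated
          token was copied verbatim from the needle.

    Returns the fraction of copy steps at which the head's most-attended
    context token is a needle token.
    """
    hits, copy_steps = 0, 0
    for t in range(attn.shape[0]):
        if copied_from_needle[t]:
            copy_steps += 1
            # Does the head's strongest attention point at the needle?
            if int(np.argmax(attn[t])) in needle_positions:
                hits += 1
    return hits / copy_steps if copy_steps else 0.0

# Synthetic example: 3 decoding steps, 10-token context, needle at 3-5.
attn = np.zeros((3, 10))
attn[0, 4] = 1.0   # attends to a needle token
attn[1, 5] = 1.0   # attends to a needle token
attn[2, 0] = 1.0   # attends elsewhere
score = retrieval_score(attn, {3, 4, 5}, [True, True, True])  # 2/3
```

A head whose score stays near 1.0 across many such probes would be a candidate "strong" retrieval head; masking heads selected this way is what the causality finding above refers to.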