This survey provides an in-depth analysis of knowledge conflicts in large language models (LLMs), focusing on three categories: context-memory, inter-context, and intra-memory conflicts. These conflicts can significantly impact the trustworthiness and performance of LLMs, especially in real-world applications where noise and misinformation are common. The survey categorizes these conflicts, explores their causes, examines the behaviors of LLMs under such conflicts, and reviews available solutions to improve the robustness of LLMs.
**Context-Memory Conflict:**
- **Causes:** Temporal misalignment and misinformation pollution.
- **Model Behaviors:** LLMs sometimes stick to their parametric knowledge and sometimes defer to the conflicting context, depending on the scenario.
- **Solutions:** Fine-tuning, prompting, decoding, knowledge plug-in, pre-training, and fact validity prediction.
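To make the prompting family of solutions concrete, the sketch below shows a generic context-faithful prompt that asks the model to prioritize the supplied passage over its parametric memory. This is a minimal illustration under stated assumptions: the `llm` callable and the template wording are hypothetical placeholders, not the specific prompts studied in the survey.

```python
from typing import Callable

# Hypothetical interface: any callable mapping a prompt string to a completion string.
LLM = Callable[[str], str]

# Illustrative "context-faithful" instruction: steer the model toward the retrieved
# passage when it conflicts with what the model remembers from pre-training.
CONTEXT_FAITHFUL_TEMPLATE = (
    "Answer the question using only the passage below. "
    "If the passage disagrees with what you believe, follow the passage.\n\n"
    "Passage: {context}\n\nQuestion: {question}\nAnswer:"
)

def answer_with_context(llm: LLM, context: str, question: str) -> str:
    """Favor contextual knowledge over parametric memory via an explicit instruction."""
    prompt = CONTEXT_FAITHFUL_TEMPLATE.format(context=context, question=question)
    return llm(prompt).strip()
```

Fine-tuning, decoding-time interventions, and knowledge plug-ins pursue the same goal by modifying the model or its generation process rather than the prompt.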
**Inter-Context Conflict:**
- **Causes:** Misinformation and outdated information.
- **Model Behaviors:** LLMs favor context that appears relevant to the query and that aligns with their parametric knowledge.
- **Solutions:** Specialized models, general models, training approaches, query augmentation, and improving robustness.
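As a rough illustration of the "general models" line of work, the sketch below uses an off-the-shelf LLM as a zero-shot judge that flags contradictory pairs among retrieved passages. The `llm` callable and the prompt wording are assumptions for illustration, not a method prescribed by the survey.

```python
from itertools import combinations
from typing import Callable, List, Tuple

LLM = Callable[[str], str]  # hypothetical prompt -> completion interface

CONFLICT_PROMPT = (
    "Do the two passages below contradict each other on any factual claim? "
    "Answer 'yes' or 'no'.\n\nPassage A: {a}\n\nPassage B: {b}\nAnswer:"
)

def find_conflicting_pairs(llm: LLM, passages: List[str]) -> List[Tuple[int, int]]:
    """Return index pairs of passages that the judge model considers contradictory."""
    conflicts = []
    for (i, a), (j, b) in combinations(enumerate(passages), 2):
        verdict = llm(CONFLICT_PROMPT.format(a=a, b=b)).strip().lower()
        if verdict.startswith("yes"):
            conflicts.append((i, j))
    return conflicts
```

Flagged pairs can then be filtered out or surfaced to the user before the passages are passed to a retrieval-augmented generation pipeline.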
**Intra-Memory Conflict:**
- **Causes:** Training corpus bias, decoding strategies, and knowledge editing.
- **Model Behaviors:** Inconsistent responses to semantically identical queries.
- **Solutions:** Fine-tuning, plug-in, output ensemble, and improving consistency and factuality.
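The output-ensemble idea can be sketched as a simple majority vote over repeated samples of paraphrased queries. The `llm` callable, the paraphrase list, and the lowercase normalization are illustrative assumptions rather than a specific method from the surveyed papers.

```python
from collections import Counter
from typing import Callable, List

LLM = Callable[[str], str]  # hypothetical prompt -> completion interface

def ensemble_answer(llm: LLM, paraphrases: List[str], samples_per_query: int = 3) -> str:
    """Reduce self-inconsistency by majority-voting over paraphrases and repeated samples."""
    votes: Counter = Counter()
    for query in paraphrases:
        for _ in range(samples_per_query):
            votes[llm(query).strip().lower()] += 1  # crude normalization before voting
    answer, _ = votes.most_common(1)[0]
    return answer
```

For example, `ensemble_answer(llm, ["Who wrote Hamlet?", "Hamlet was written by whom?"])` returns whichever answer string the model produces most often across the sampled generations.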
The survey highlights the need for more fine-grained approaches to addressing knowledge conflicts, taking into account factors such as the user's query, the source of the conflicting information, and user expectations. It also emphasizes the importance of evaluating LLMs on a broader range of downstream tasks in order to build more robust and reliable models.