Knowledge Mechanisms in Large Language Models: A Survey and Perspective

31 Jul 2024 | Mengru Wang, Yunzhi Yao, Ziwen Xu, Shuofei Qiao, Shumin Deng, Peng Wang, Xiang Chen, Jia-Chen Gu, Yong Jiang, Pengjun Xie, Fei Huang, Huajun Chen, Ningyu Zhang
This paper reviews knowledge mechanisms in Large Language Models (LLMs) through a novel taxonomy spanning knowledge utilization and knowledge evolution. Knowledge utilization covers memorization, comprehension and application, and creation, while knowledge evolution examines the dynamic progression of knowledge within individual LLMs and across groups of LLMs. The paper discusses what knowledge LLMs have learned, the fragility of parametric knowledge, and potential dark knowledge (a hypothesis) that remains challenging to address. It also explores how to construct more efficient and trustworthy LLMs from the perspective of knowledge mechanisms, raises open questions about the knowledge LLMs have and have not acquired, and provides future directions and tools for analyzing knowledge mechanisms, highlighting the importance of understanding knowledge in LLMs for future research and development.

The contributions include being the first review of knowledge mechanisms in LLMs and a novel taxonomy covering the entire knowledge life cycle: knowledge utilization at a specific time and knowledge evolution across all periods. The paper proposes a new perspective for analyzing knowledge utilization at three levels, namely memorization, comprehension and application, and creation, and it examines knowledge evolution in individual and group LLMs, analyzing the inherent conflicts and integration that arise in this process.

The paper observes that LLMs have learned basic world knowledge through memorization, but this learned knowledge is fragile, leading to challenges in comprehension and application such as hallucinations and knowledge conflicts; current LLMs also struggle with creation due to architectural limitations. It speculates that this fragility may be primarily due to improper learning data and suggests that data quantity is crucial for knowledge robustness. Finally, the paper discusses dark knowledge not yet learned by machines or humans, explores how LLMs can expand the boundaries of unknown knowledge from interdisciplinary perspectives, and argues for more effective strategies to address knowledge conflicts and hallucinations.