A Survey on RAG Meeting LLMs: Towards Retrieval-Augmented Large Language Models

17 Jun 2024 | Wenqi Fan, Yujuan Ding*, Liangbo Ning, Shijie Wang, Hengyun Li, Dawei Yin, Tat-Seng Chua, Qing Li
Retrieval-Augmented Generation (RAG) has emerged as a key technique for enhancing the performance of Large Language Models (LLMs) by integrating external knowledge. This survey provides a comprehensive overview of Retrieval-Augmented Large Language Models (RA-LLMs) from three main technical perspectives: architectures, training strategies, and applications. It begins with an introduction to LLMs and prompt learning, then reviews RA-LLMs in terms of their retrieval, generation, and augmentation components, including the question of whether and how often retrieval is necessary. It compares the main training approaches for RA-LLMs, covering training-free, independent training, sequential training, and joint training, surveys the various applications of RA-LLMs, and concludes with a discussion of key challenges and potential directions for future research.
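To make the retrieval, generation, and augmentation stages discussed in the survey concrete, the following Python sketch wires them into one minimal loop. The toy bag-of-words retriever, the example corpus, and the stubbed generate() call are illustrative assumptions, not any specific system covered by the survey.

import math
from collections import Counter

# A minimal sketch of the retrieve-then-generate loop.
# Corpus, scoring function, and generate() stub are stand-ins.

CORPUS = [
    "RAG augments an LLM prompt with documents retrieved from an external corpus.",
    "Joint training optimizes the retriever and the generator together.",
    "Training-free RAG plugs a frozen retriever into a frozen LLM at inference time.",
]

def bow(text):
    """Bag-of-words vector for toy cosine-similarity retrieval."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, k=2):
    """Rank corpus documents by similarity to the query (the retrieval stage)."""
    q = bow(query)
    return sorted(CORPUS, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]

def generate(prompt):
    """Placeholder for an LLM call (the generation stage)."""
    return f"[LLM answer conditioned on a prompt of {len(prompt)} chars]"

def rag_answer(query):
    # The augmentation stage: retrieved context is prepended to the query.
    context = "\n".join(retrieve(query))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return generate(prompt)

print(rag_answer("How does joint training work in RAG?"))

In a training-free RA-LLM, both retrieve() and generate() would be frozen pretrained components; the training strategies the survey compares differ mainly in whether and how these two stages are optimized together.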