Enhancing LLM Factual Accuracy with RAG to Counter Hallucinations: A Case Study on Domain-Specific Queries in Private Knowledge-Bases

15 Mar 2024 | Jiarui Li*, Ye Yuan*, Zehua Zhang*
This paper presents an end-to-end system designed to enhance the factual accuracy of Large Language Models (LLMs) for domain-specific and time-sensitive queries using Retrieval Augmented Generation (RAG). The system integrates a RAG pipeline with upstream dataset processing and downstream performance evaluation. To address the issue of LLM hallucinations, the authors fine-tune models using a curated dataset from Carnegie Mellon University (CMU) and the Language Technologies Institute (LTI), annotated with a teacher model. The experiments demonstrate the system's effectiveness in generating more accurate answers to specific and time-sensitive inquiries, while also highlighting the limitations of fine-tuning LLMs on small-scale and skewed datasets. The research underscores the potential of RAG systems to improve LLM performance on knowledge-intensive tasks by leveraging external datasets. The code and models are available on GitHub.
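The abstract describes a RAG pipeline: retrieve passages relevant to a query from an external corpus, then condition the LLM's answer on those passages rather than on parametric memory alone. The paper's own implementation is not shown here; the following is a minimal, dependency-free sketch of the retrieve-then-prompt pattern, using a toy bag-of-words retriever (the real system would use learned embeddings and an actual LLM call).

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy "embedding": bag-of-words term-frequency counts.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    # Rank corpus documents by similarity to the query; keep top-k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs, k=2):
    # Ground the answer in retrieved context to reduce hallucination.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs, k))
    return (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}\nAnswer:"
    )

# Hypothetical mini-corpus for illustration only.
docs = [
    "The Language Technologies Institute is part of CMU's School of Computer Science.",
    "RAG augments a language model with passages retrieved from an external corpus.",
    "Fine-tuning on small, skewed datasets can degrade factual accuracy.",
]
prompt = build_prompt("What is RAG?", docs, k=1)
print(prompt)
```

In the full system, `build_prompt`'s output would be sent to the (possibly fine-tuned) LLM; keeping retrieval separate from generation is what lets the pipeline handle time-sensitive facts without retraining the model.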