31 Mar 2024 | Chi-Min Chan, Chunpu Xu, Ruibin Yuan, Hongyin Luo, Wei Xue, Yike Guo, Jie Fu
The paper introduces RQ-RAG (Learning to Refine Queries for Retrieval Augmented Generation), a framework that enhances Large Language Models (LLMs) by training them to refine queries through rewriting, decomposing, and disambiguating. The authors address a key limitation of LLMs: because they draw only on knowledge memorized during pretraining, they are prone to generating inaccurate or hallucinated responses. RAG mitigates this by incorporating external, relevant documents into the response generation process, combining non-parametric knowledge with LLMs' in-context learning abilities.
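Concretely, the approach can be pictured as a refine-then-retrieve-then-generate loop. The sketch below is a minimal illustration under assumed interfaces: the `LLM.complete` and `Retriever.search` signatures, the helper names, and the prompt wording are placeholders for exposition, not the paper's actual training format or API.

```python
from typing import List, Protocol


class LLM(Protocol):
    """Placeholder interface for a text-completion model."""
    def complete(self, prompt: str) -> str: ...


class Retriever(Protocol):
    """Placeholder interface for a document retriever."""
    def search(self, query: str, top_k: int) -> List[str]: ...


def refine_query(llm: LLM, query: str) -> List[str]:
    """Ask the trained model to rewrite, decompose, or disambiguate a query.

    Returns a single refined query for simple inputs, or several
    sub-queries when the model chooses to decompose a multi-hop question.
    """
    prompt = (
        "Refine the following query for retrieval by rewriting it, "
        "decomposing it into sub-queries, or disambiguating it. "
        "Output one query per line:\n" + query
    )
    return [q for q in llm.complete(prompt).splitlines() if q.strip()]


def rq_rag_answer(llm: LLM, retriever: Retriever, query: str) -> str:
    """Refine the query, retrieve for each refinement, then generate."""
    context: List[str] = []
    for refined in refine_query(llm, query):
        context.extend(retriever.search(refined, top_k=3))
    prompt = (
        "Context:\n" + "\n".join(context)
        + "\n\nQuestion: " + query + "\nAnswer:"
    )
    return llm.complete(prompt)
```

The key design point this illustrates is that retrieval is driven by the refined queries rather than the raw user input, so a decomposed multi-hop question gathers evidence for each sub-question before a single grounded answer is generated.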
RQ-RAG targets ambiguous or complex queries, which must be clarified or decomposed before they can be answered accurately. Evaluated on three single-hop and three multi-hop QA datasets, the method surpasses the previous state of the art, outperforming the best prior method by an average of 1.9% on the single-hop datasets while also improving on the complex, multi-hop ones.
The authors highlight the effectiveness of regenerating responses based on search results during data construction, rather than simply reusing the original dataset outputs. They also show that the system is robust across different data sources and has a high performance upper bound, indicating room for further improvement. The paper includes detailed descriptions of dataset construction, generator training, and sampling strategies, along with comprehensive experimental results and analysis.
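The regeneration idea can be sketched as follows: rather than copying the original dataset answer into a training example, a capable model re-answers each question conditioned on the retrieved documents, so the training label is grounded in the same evidence the generator will see. Reusing the `LLM` and `Retriever` interfaces from the sketch above, all names here (`teacher_llm`, `build_training_example`) are illustrative, not the authors' code.

```python
from typing import Dict, List


def build_training_example(teacher_llm: LLM, retriever: Retriever,
                           question: str) -> Dict[str, object]:
    """Build one training example with a regenerated, retrieval-grounded answer."""
    docs: List[str] = retriever.search(question, top_k=3)
    prompt = (
        "Context:\n" + "\n".join(docs)
        + "\n\nQuestion: " + question + "\nAnswer:"
    )
    # Replace the original dataset label with a response grounded in the
    # retrieved documents, per the paper's data-construction finding.
    regenerated = teacher_llm.complete(prompt)
    return {"question": question, "context": docs, "answer": regenerated}
```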