Adaptive-RAG: Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity


28 Mar 2024 | Soyeong Jeong, Jinheon Baek, Sukmin Cho, Sung Ju Hwang, Jong C. Park
**Abstract:** Retrieval-Augmented Large Language Models (LLMs) enhance response accuracy by incorporating external knowledge bases. However, existing approaches either handle simple queries inefficiently or fail to adequately address complex multi-step queries. This work proposes Adaptive-RAG, a framework that dynamically selects the most suitable strategy for the LLM based on query complexity. A classifier, trained to predict query complexity, operationalizes this selection. The approach balances among no-retrieval, single-step retrieval-augmented, and iterative retrieval-augmented strategies, improving both efficiency and accuracy across queries of varying complexity. Validation on open-domain QA datasets shows improved performance over adaptive retrieval baselines.

**Introduction:** Recent LLMs excel at diverse tasks, including QA, but still generate incorrect answers because their parametric memory is limited. Retrieval-augmented LLMs address this by incorporating external knowledge, improving accuracy and keeping answers current. Single-hop and multi-hop QA approaches are both common, but the former can under-serve queries that require multi-step reasoning, while the latter expends unnecessary computation on simple queries. Adaptive-RAG instead adjusts its strategy per query, using a classifier to determine the most appropriate approach, and thereby balances efficiency and accuracy, outperforming existing adaptive strategies.

**Related Work:** Open-domain QA involves retrieving and interpreting relevant documents. Multi-hop QA additionally requires iterative reasoning over multiple documents. Adaptive retrieval strategies decide whether to retrieve documents at all, but previous adaptive methods operate at a coarse granularity, treating queries as either uniformly simple or uniformly complex, and so lack fine-grained complexity handling.

**Method:** Adaptive-RAG pre-determines the complexity of an incoming query with a classifier and routes the query to the most suitable strategy: no retrieval for the simplest queries, single-step retrieval for moderately complex ones, and iterative multi-step retrieval for the most complex. The classifier is trained on labels collected automatically from model predictions and inherent dataset biases, without human annotation; a minimal sketch of this routing and labeling logic appears below. Experimental results show improved accuracy and efficiency compared to existing methods.
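The following Python sketch illustrates the two core ideas described above: routing each query to one of three strategies by predicted complexity, and deriving silver training labels from which strategy answers correctly. This is a minimal illustration under stated assumptions, not the paper's implementation; the interfaces `classify`, `llm`, and `retrieve` are hypothetical placeholders (the paper trains a separate small language model as the classifier).

```python
from typing import Callable, List


def answer(query: str,
           classify: Callable[[str], str],        # -> "simple" | "medium" | "complex"
           llm: Callable[[str], str],             # prompt -> generated text
           retrieve: Callable[[str], List[str]],  # query -> passages
           max_hops: int = 3) -> str:
    """Route the query to one of three strategies by predicted complexity."""
    complexity = classify(query)

    if complexity == "simple":
        # No retrieval: answer directly from the LLM's parametric memory.
        return llm(query)

    if complexity == "medium":
        # Single-step retrieval: retrieve once, then generate.
        passages = retrieve(query)
        return llm("\n".join(passages) + "\n" + query)

    # Complex: iterative multi-step retrieval, interleaving retrieval
    # with intermediate generation until the hop budget is spent.
    context: List[str] = []
    thought = query
    for _ in range(max_hops):
        context.extend(retrieve(thought))
        thought = llm("\n".join(context) + "\n" + query)
    return thought


def silver_label(query: str, gold: str,
                 llm: Callable[[str], str],
                 retrieve: Callable[[str], List[str]]) -> str:
    """Assign a training label by the simplest strategy that answers correctly,
    mirroring the paper's automatic label collection from model predictions.
    (The paper additionally falls back on dataset bias, e.g. labeling queries
    from multi-hop benchmarks as complex when no strategy succeeds.)"""
    if llm(query) == gold:
        return "simple"
    if llm("\n".join(retrieve(query)) + "\n" + query) == gold:
        return "medium"
    return "complex"


if __name__ == "__main__":
    # Toy stand-ins to show the call shape; not meaningful models.
    print(answer("Who wrote Hamlet?",
                 classify=lambda q: "simple",
                 llm=lambda p: "William Shakespeare",
                 retrieve=lambda q: []))
```

The efficiency gain comes from the routing itself: the costly iterative loop is only paid for when the classifier predicts it is needed, while simple queries skip retrieval entirely.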
**Conclusion:** Adaptive-RAG enhances QA systems by dynamically adjusting retrieval strategies to match query complexity, balancing efficiency and accuracy. Future work could improve the complexity classifier and its training datasets.