Grounding Language Model with Chunking-Free In-Context Retrieval

15 Feb 2024 | Hongjin Qian, Zheng Liu, Kelong Mao, Yujia Zhou, Zhicheng Dou
This paper introduces a novel Chunking-Free In-Context (CFIC) retrieval approach tailored for Retrieval-Augmented Generation (RAG) systems. Traditional RAG systems struggle to ground responses in precise evidence text because of the difficulty of processing lengthy documents and filtering out irrelevant content. Common solutions, such as document chunking and adapting language models to handle longer contexts, have limitations: chunking disrupts semantic coherence, while longer-context models fail to address noise and inaccuracy in evidence retrieval.
CFIC addresses these challenges by bypassing the conventional chunking process entirely. It uses the encoded hidden states of a document for in-context retrieval, employing auto-regressive decoding to identify the specific evidence text a user query requires. CFIC is further enhanced by two decoding strategies: Constrained Sentence Prefix Decoding and Skip Decoding. These strategies improve the efficiency of the retrieval process and ensure the fidelity of the generated grounding evidence. The paper evaluates CFIC on a range of open QA datasets, demonstrating its superiority in retrieving relevant and accurate evidence compared with traditional methods. By removing the need for document chunking, CFIC presents a more streamlined, effective, and efficient retrieval solution, making it a valuable advancement in the field of RAG systems.
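To make the Constrained Sentence Prefix Decoding idea concrete, here is a minimal sketch of the underlying mechanism: at each decoding step, the model may only emit tokens that keep the generated sequence a prefix of some sentence actually present in the source document, which guarantees the retrieved evidence is verbatim text. All function names, the whitespace tokenization, and the greedy scoring loop are illustrative assumptions, not the authors' implementation (which operates on the model's hidden states and real token vocabularies).

```python
def build_prefix_trie(sentences):
    """Build a trie over tokenized document sentences.

    Every root-to-node path in the trie is a valid prefix of some
    sentence, so walking the trie during decoding enforces the
    sentence-prefix constraint. (Whitespace tokenization is a
    simplification for this sketch.)
    """
    trie = {}
    for sent in sentences:
        node = trie
        for tok in sent.split():
            node = node.setdefault(tok, {})
        node["<END>"] = {}  # marks a complete document sentence
    return trie


def allowed_next_tokens(trie, generated):
    """Return the tokens that keep `generated` a valid sentence prefix."""
    node = trie
    for tok in generated:
        if tok not in node:
            return []  # no longer a prefix of any document sentence
        node = node[tok]
    return [t for t in node if t != "<END>"]


def constrained_decode(trie, score_fn, max_len=50):
    """Greedy constrained decoding with a stand-in language model.

    `score_fn(context, token)` plays the role of the LM's next-token
    score; among the *allowed* tokens only, the highest-scoring one
    is chosen. Decoding stops when no continuation remains.
    """
    generated = []
    for _ in range(max_len):
        candidates = allowed_next_tokens(trie, generated)
        if not candidates:
            break
        generated.append(max(candidates, key=lambda t: score_fn(generated, t)))
    return " ".join(generated)
```

A toy run: given the document sentences `["the cat sat", "the dog ran"]` and a mock score that prefers the token `dog`, `constrained_decode` can only produce a verbatim document sentence, never a hallucinated mixture such as "the cat ran".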