2024 | Kuang-Huei Lee, Xinyun Chen, Hiroki Furuta, John Canny, Ian Fischer
ReadAgent is a novel LLM agent system designed to address the limitations of current Large Language Models (LLMs) in handling very long inputs. Inspired by human reading behavior, ReadAgent increases the effective context length up to 20 times by implementing three key steps: episode pagination, memory gisting, and interactive look-up. Episode pagination involves dividing the long text into manageable chunks, memory gisting compresses these chunks into shorter *gist memories*, and interactive look-up allows the LLM to retrieve relevant details from the original text as needed. Evaluations on three long-document reading comprehension tasks—QuALITY, NarrativeQA, and QMSum—show that ReadAgent outperforms baselines while significantly extending the effective context window. The approach demonstrates the potential of LLMs in reasoning over long contexts and highlights the importance of interactive and compressed representations for effective task performance.
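The three steps can be sketched as a minimal pipeline. This is not the paper's implementation: `llm` is a hypothetical stand-in for a real model call (stubbed here so the sketch runs), and fixed-size pagination plus keyword-based look-up are simplifying assumptions — the paper has the LLM itself choose page breaks and decide which pages to re-read.

```python
def llm(prompt: str) -> str:
    """Hypothetical LLM call; stubbed as naive truncation so the sketch runs."""
    return prompt[:60]

def paginate(text: str, page_size: int = 200) -> list[str]:
    """Episode pagination: split the long text into page-sized episodes.
    (Fixed-size chunks are an assumption; the paper lets the LLM pick breaks.)"""
    return [text[i:i + page_size] for i in range(0, len(text), page_size)]

def gist(pages: list[str]) -> list[str]:
    """Memory gisting: compress each page into a short gist memory."""
    return [llm(f"Summarize briefly: {p}") for p in pages]

def answer(question: str, pages: list[str], gists: list[str]) -> str:
    """Interactive look-up: re-read original pages whose gists look relevant,
    then answer from the gist memories plus the retrieved pages."""
    relevant = [p for p, g in zip(pages, gists)
                if any(w in g.lower() for w in question.lower().split())]
    context = " ".join(gists) + " " + " ".join(relevant)
    return llm(f"Question: {question}\nContext: {context}")
```

The point of the structure is that the model's working context holds only the compressed gists, while the full-resolution pages stay outside the context and are fetched on demand — which is how the effective context length can exceed the raw window.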