RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Horizon Generation


March 2024 | Zihao Wang, Anji Liu, Haowei Lin, Jiaqi Li, Xiaojian Ma and Yitao Liang
RAT (Retrieval Augmented Thoughts) is a prompting strategy that enhances large language models (LLMs) on long-horizon generation tasks by combining retrieval-augmented generation (RAG) with chain-of-thought (CoT) prompting. After an initial zero-shot CoT is generated, the method iteratively revises each thought step using information retrieved with a query built from the task, the current step, and the preceding thought steps. This iterative refinement substantially improves the accuracy and reliability of LLM outputs and reduces hallucination. RAT has been evaluated on a range of tasks, including code generation, mathematical reasoning, embodied task planning, and creative writing, with substantial gains throughout.
For example, RAT improves code-generation performance by 13.63% on average and raises mathematical-reasoning accuracy by 16.96%; it also outperforms other baselines on embodied planning and creative writing. RAT's effectiveness is attributed to grounding each reasoning step in accurate, relevant external knowledge as it is refined. The approach is robust across different LLM scales and generalizes well, underscoring the value of combining retrieval with reasoning for complex tasks.
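The iterative draft-then-revise loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `generate` and `retrieve` are hypothetical stand-ins for a real LLM call and a real retriever (e.g. a vector store over a corpus), and the prompt templates are assumptions.

```python
def generate(prompt: str) -> str:
    """Stand-in for an LLM call; a real system would query a model here."""
    return f"[model output for: {prompt[:40]}...]"

def retrieve(query: str, k: int = 3) -> list[str]:
    """Stand-in for retrieval over an external corpus."""
    return [f"doc-{i} matching '{query[:30]}'" for i in range(k)]

def rat(task: str, num_steps: int = 3) -> list[str]:
    # 1. Draft an initial zero-shot chain of thought, as a list of steps.
    draft_steps = [generate(f"{task} -- thought step {i + 1}")
                   for i in range(num_steps)]

    revised: list[str] = []
    for step in draft_steps:
        # 2. Build a retrieval query from the task, the already-revised
        #    (past) thoughts, and the current draft step.
        query = " ".join([task, *revised, step])
        evidence = "\n".join(retrieve(query))
        # 3. Revise the current step in light of the retrieved evidence,
        #    so each step is grounded before the next one is processed.
        revised.append(generate(
            f"Revise this step using the evidence.\n"
            f"Evidence:\n{evidence}\nStep:\n{step}"))
    return revised
```

The key design point is that revision is sequential: each query folds in the already-revised earlier steps, so corrections propagate forward instead of each step being fixed in isolation.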