Reducing hallucination in structured outputs via Retrieval-Augmented Generation

12 Apr 2024 | Patrice Béchard, Orlando Marquez Ayala
The paper "Reducing hallucination in structured outputs via Retrieval-Augmented Generation" addresses the issue of hallucination in Generative AI (GenAI) systems, particularly in the context of converting natural language requirements into workflows. The authors, Patrice Béchard and Orlando Marquez Ayala from ServiceNow, propose a system that leverages Retrieval-Augmented Generation (RAG) to improve the quality and reliability of structured outputs. By integrating a retriever model with a Large Language Model (LLM), the system reduces hallucination and enhances the generalization of the LLM to out-of-domain settings. The key contributions of the work include: 1. **Application of RAG in Workflow Generation**: The authors apply RAG to the task of generating structured outputs, specifically JSON documents representing workflows. 2. **Reduction of Hallucination**: The system significantly reduces hallucination, which is the generation of non-existent steps or tables in the output. 3. **Performance with a Small LLM**: The use of a small, well-trained retriever model allows for the deployment of a smaller LLM without compromising performance. The paper also discusses the methodology, including the training of the retriever and LLM, and evaluates the system using various metrics such as Trigger Exact Match, Bag of Steps, and Hallucinated Tables. The results show that the RAG approach effectively reduces hallucination and improves the overall performance of the system, even with limited computational resources. The authors conclude by highlighting the importance of reducing hallucination for the adoption of real-world GenAI systems and suggest future work on improving the synergy between the retriever and LLM.The paper "Reducing hallucination in structured outputs via Retrieval-Augmented Generation" addresses the issue of hallucination in Generative AI (GenAI) systems, particularly in the context of converting natural language requirements into workflows. The authors, Patrice Béchard and Orlando Marquez Ayala from ServiceNow, propose a system that leverages Retrieval-Augmented Generation (RAG) to improve the quality and reliability of structured outputs. By integrating a retriever model with a Large Language Model (LLM), the system reduces hallucination and enhances the generalization of the LLM to out-of-domain settings. The key contributions of the work include: 1. **Application of RAG in Workflow Generation**: The authors apply RAG to the task of generating structured outputs, specifically JSON documents representing workflows. 2. **Reduction of Hallucination**: The system significantly reduces hallucination, which is the generation of non-existent steps or tables in the output. 3. **Performance with a Small LLM**: The use of a small, well-trained retriever model allows for the deployment of a smaller LLM without compromising performance. The paper also discusses the methodology, including the training of the retriever and LLM, and evaluates the system using various metrics such as Trigger Exact Match, Bag of Steps, and Hallucinated Tables. The results show that the RAG approach effectively reduces hallucination and improves the overall performance of the system, even with limited computational resources. The authors conclude by highlighting the importance of reducing hallucination for the adoption of real-world GenAI systems and suggest future work on improving the synergy between the retriever and LLM.