Ask Optimal Questions: Aligning Large Language Models with Retriever’s Preference in Conversational Search


19 Feb 2024 | Chanwoong Yoon¹*, Gangwoo Kim¹*, Byeongguk Jeon¹, Sungdong Kim²˒³, Yohan Jo⁴, Jaewoo Kang¹†
The paper introduces RetPO (Retriever's Preference Optimization), a framework for optimizing a language model (LM) to reformulate search queries in line with the preferences of target retrieval systems. The process begins by prompting a large LM to produce diverse candidate rewrites of each conversational query; the retrieval performance of each rewrite is then recorded as the retriever's preference signal. This yields RF Collection, a large-scale dataset of retrievers' feedback containing over 410K query rewrites across 12K conversations. The dataset is then used to fine-tune a smaller open-source LM, aligning it with the retrievers' preferences. The resulting model achieves state-of-the-art performance on two recent conversational search benchmarks, QReCC and TopiOCQA, outperforming existing baselines including GPT-3.5. The paper's contributions include defining optimal queries in conversational search, constructing RF Collection, and aligning an open-source LM with retriever preferences. The experimental results show that RetPO substantially improves retrieval performance and generalizes promisingly to other tasks.
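To make the data-collection recipe concrete, below is a minimal Python sketch of the loop the summary describes: sample several candidate rewrites, score each by retrieval performance, and keep the best- and worst-scoring rewrites as a preference pair for later preference-based fine-tuning (e.g., DPO). The function names `generate_rewrites` and `retrieval_score`, and the toy token-overlap metric, are illustrative assumptions, not the paper's released code; in the actual framework, a large LM produces the rewrites and the target retriever's performance supplies the preference signal.

```python
# Hedged sketch of RetPO-style preference-pair collection (assumed names/logic).

from dataclasses import dataclass


@dataclass
class PreferencePair:
    context: str   # conversation history plus the current question
    chosen: str    # rewrite the target retriever scored highest
    rejected: str  # rewrite the target retriever scored lowest


def generate_rewrites(context: str) -> list[str]:
    # Placeholder: in practice, prompt a large LM for several diverse,
    # self-contained rewrites of the latest conversational question.
    return [
        "Who directed the film mentioned earlier?",
        "Who directed Inception?",
        "Inception director name",
    ]


def retrieval_score(rewrite: str, gold_passage: str) -> float:
    # Placeholder metric: token overlap with the gold passage. The paper
    # instead runs the target retriever and uses its retrieval performance
    # (e.g., rank of the gold passage) as the preference signal.
    r = set(rewrite.lower().split())
    g = set(gold_passage.lower().split())
    return len(r & g) / max(len(r), 1)


def build_preference_pair(context: str, gold_passage: str) -> PreferencePair:
    # Rank all candidate rewrites by retriever feedback; pair best vs. worst.
    ranked = sorted(
        generate_rewrites(context),
        key=lambda q: retrieval_score(q, gold_passage),
        reverse=True,
    )
    return PreferencePair(context=context, chosen=ranked[0], rejected=ranked[-1])


if __name__ == "__main__":
    pair = build_preference_pair(
        context="User: I loved Inception. Who directed it?",
        gold_passage="Inception is a 2010 film directed by Christopher Nolan.",
    )
    print("chosen:", pair.chosen)
    print("rejected:", pair.rejected)
```

Repeating this over thousands of conversations is what would produce an RF Collection-style dataset of (context, chosen, rejected) triples, which a smaller LM can then be fine-tuned on with a preference-optimization objective.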