The paper introduces RetPO (Retriever's Preference Optimization), a novel framework for optimizing a language model (LM) to reformulate conversational search queries in line with the preferences of target retrieval systems. The process begins by prompting a large LM to produce diverse candidate rewrites for each query; the retrieval performance of each rewrite is then measured and collected as the retrievers' preference signal. This yields RF COLLECTION, a large-scale dataset of over 410K query rewrites across 12K conversations, which is then used to fine-tune a smaller LM so that its rewrites align with the retrievers' preferences. The resulting model achieves state-of-the-art performance on two recent conversational search benchmarks, QReCC and TopiOCQA, outperforming existing baselines including GPT-3.5. The paper's contributions are threefold: defining optimal queries in conversational search, constructing RF COLLECTION, and aligning an open-source LM with retriever preferences. The experimental results demonstrate that RetPO significantly improves retrieval performance and shows promising generalization to other tasks.
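To make the data-construction step concrete, below is a minimal sketch of how a rewrite-and-rank pipeline of this kind could assemble preference pairs for optimization. Everything here is an illustrative assumption rather than the paper's implementation: the function names and prompt are invented, and the token-overlap scorer is a toy stand-in for the real retrievers (the paper measures actual retrieval performance, e.g. recall, against the target retrieval system).

```python
# Hedged sketch of a RetPO-style preference-pair pipeline.
# All names below (generate_rewrites, retrieval_score, PreferencePair)
# are hypothetical; the paper's actual prompts, retrievers, and metrics differ.

from dataclasses import dataclass


@dataclass
class PreferencePair:
    context: str    # conversation history plus the current question
    chosen: str     # rewrite the retriever "prefers" (higher score)
    rejected: str   # rewrite with lower retrieval performance


def generate_rewrites(context: str, n: int = 4) -> list[str]:
    """Stand-in for prompting a large LM for n candidate query rewrites."""
    # In practice this would call an LLM with a query-rewriting prompt.
    return [f"{context} (candidate rewrite {i})" for i in range(n)]


def retrieval_score(query: str, gold_passage: str) -> float:
    """Toy proxy for retriever feedback: token overlap with the gold passage.

    The paper instead runs target retrievers and scores each rewrite by
    its actual retrieval performance.
    """
    q, g = set(query.lower().split()), set(gold_passage.lower().split())
    return len(q & g) / max(len(g), 1)


def build_preference_pair(context: str, gold_passage: str) -> PreferencePair:
    """Score candidate rewrites; keep best/worst as one preference pair."""
    candidates = generate_rewrites(context)
    ranked = sorted(candidates, key=lambda c: retrieval_score(c, gold_passage))
    return PreferencePair(context=context, chosen=ranked[-1], rejected=ranked[0])


if __name__ == "__main__":
    pair = build_preference_pair(
        context="Earlier turns discuss RetPO. Q: what dataset does it build?",
        gold_passage="RetPO constructs RF COLLECTION with over 410K rewrites",
    )
    print("chosen:", pair.chosen)
    print("rejected:", pair.rejected)
```

Pairs produced this way would then feed a preference-optimization fine-tuning stage on the smaller LM, so that rewrites the retrievers score highly become more likely than those they score poorly.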