STaR-GATE: Teaching Language Models to Ask Clarifying Questions


7 Aug 2024 | Chinmaya Andukuri, Jan-Philipp Fränken, Tobias Gerstenberg, Noah D. Goodman
**Abstract:** When prompting language models to complete tasks, users often leave important aspects unsaid, leading to ambiguity. Asking clarifying questions can resolve this ambiguity, but models often struggle to generate effective queries. This paper introduces STaR-GATE, a method for improving a language model's ability to self-improve by rewarding it for generating useful questions. The authors build a synthetic dataset of 25,500 unique persona-task prompts to simulate conversations between a pretrained language model (the Questioner) and a Roleplayer whose preferences are unknown. The Questioner is iteratively fine-tuned on questions that increase the probability of high-quality responses generated by an Oracle with access to the Roleplayer's preferences. After two iterations, the Questioner asks better questions, leading to responses that are preferred over those from the initial model on 72% of tasks. The results indicate that teaching a language model to ask better questions yields more personalized and effective responses.

**Contributions:**
1. Introduction of STaR-GATE, a simple algorithm that iteratively improves a language model's ability to elicit user preferences through questioning.
2. Generation of a synthetic dataset of 25,500 unique persona-task-response prompts.
3. Demonstration that fine-tuning with STaR-GATE increases both the probability of generating gold responses and win rates compared to the initial model.
4. Evidence that adding response regularization to STaR-GATE yields a model that can use elicited preferences to generate better responses, achieving a 72% win rate against the initial model.
5. Demonstration that the fine-tuned model generalizes beyond the Roleplayer it was trained against.
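To make the training loop concrete, here is a minimal toy sketch of the idea behind STaR-GATE: keep only those question dialogs that raise an Oracle's score for the gold response above the no-question baseline, and use them as fine-tuning data. The functions below are hypothetical mocks for illustration; in the paper, the Questioner is a language model, the Roleplayer simulates a persona, and the Oracle is a strong model that scores gold responses.

```python
import random

random.seed(0)

def ask_question(task):
    """Questioner proposes a clarifying question (mocked as a random choice)."""
    return random.choice(["What is your skill level?",
                          "What is your budget?",
                          "Do you have time constraints?"])

def oracle_log_prob(dialog):
    """Oracle's log-probability of the gold response given the dialog (mocked).

    In this toy model, questions that touch on preferences raise the score."""
    return -5.0 + 1.5 * sum("skill" in q or "budget" in q for q in dialog)

def star_gate_iteration(tasks, num_turns=2):
    """One STaR-GATE iteration: collect dialogs whose questions raise the
    Oracle's log-probability of the gold response above the baseline."""
    finetune_set = []
    for task in tasks:
        dialog = [ask_question(task) for _ in range(num_turns)]
        baseline = oracle_log_prob([])          # score with no questions asked
        if oracle_log_prob(dialog) > baseline:  # reward only useful questions
            finetune_set.append((task, dialog))
    return finetune_set

data = star_gate_iteration([f"task-{i}" for i in range(10)])
print(f"{len(data)} of 10 dialogs kept for fine-tuning")
```

In the actual method, the kept dialogs would be used for supervised fine-tuning of the Questioner, and the loop repeats with the updated model for each iteration.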
**Related Work:** The paper reviews existing approaches to preference optimization and elicitation, highlighting the limitations of current methods and the importance of effective questioning in high-stakes domains such as healthcare and education.

**Evaluation:** The Questioner is evaluated on two metrics: the log probability of generating gold responses and win rates against the initial model. Log probabilities of gold responses increase over iterations, and win rates for the STaR-GATE model peak at 72% after two iterations.

**Ablations:** Ablation studies examine the impact of design choices such as the Roleplayer's capability and the importance of regularization during training.

**Limitations and Future Work:** The approach depends on the availability of gold responses and requires a strong model as the Oracle. Future work could explore alternative optimization methods and evaluate the model across different domains.

**Conclusion:** STaR-GATE substantially improves a language model's ability to engage in effective dialogue through targeted questioning, producing personalized responses that are better aligned with user preferences.
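As a toy illustration of the win-rate metric used in the evaluation (the responses and the judge below are hypothetical stand-ins, not the paper's actual evaluation pipeline):

```python
def win_rate(pairs, judge):
    """Fraction of tasks where the fine-tuned model's response is preferred.

    `pairs` holds (finetuned_response, initial_response) tuples; `judge`
    returns True when the first response is preferred over the second."""
    wins = sum(judge(ft, init) for ft, init in pairs)
    return wins / len(pairs)

# Hypothetical judge: prefers the response that mentions more of the
# user's stated preferences (a crude proxy for personalization).
prefs = {"beginner", "piano", "weekends"}
judge = lambda a, b: len(prefs & set(a.split())) > len(prefs & set(b.split()))

pairs = [
    ("a beginner piano plan for weekends", "a generic practice plan"),
    ("start with piano scales on weekends", "practice daily"),
    ("learn music theory first", "a beginner piano course"),
    ("short weekend piano drills for a beginner", "long daily sessions"),
]
print(f"win rate: {win_rate(pairs, judge):.0%}")  # prints "win rate: 75%"
```

In the paper, the judge role is filled by a strong evaluator model comparing responses from the fine-tuned and initial Questioners across the held-out tasks.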