Doing Personal LAPS: LLM-Augmented Dialogue Construction for Personalized Multi-Session Conversational Search

July 14–18, 2024, Washington, DC, USA | Hideaki Joko, Shubham Chatterjee, Andrew Ramsay, Arjen P. de Vries, Jeff Dalton, Faegheh Hasibi
The paper introduces LAPS (LLM-Augmented Personalized Self-Dialogue), a method for collecting large-scale, multi-session, and multi-domain conversational datasets that include user preferences. LAPS leverages large language models (LLMs) to guide human workers in generating personalized dialogues, addressing the challenge of creating realistic and diverse conversations. The method involves four key elements: dialogue act classification, guidance generation, utterance composition, and preference extraction. Using LAPS, the authors collect 1,406 multi-domain, multi-session dialogues with 11,215 extracted preferences. The collected dataset is used to train a preference extraction model and a personalized recommendation system. Experiments show that LAPS-generated conversations are as diverse and high-quality as those created by experts, and that incorporating extracted preferences into the recommendation system improves the accuracy and explainability of recommendations. The paper also discusses the benefits of using a preference memory to enhance the effective utilization of user preferences in recommendations. Overall, LAPS provides a scalable and effective approach to creating realistic personalized conversational data.
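The four elements named above can be read as a per-turn loop: classify the current dialogue act, generate guidance for the human worker, let the worker compose the next utterance(s), and extract any stated preferences into a memory that later turns and sessions can draw on. The sketch below is a minimal conceptual illustration of that loop, not the authors' implementation: the `llm` callable, the `worker_compose` stand-in for the human worker, the prompts, and all function names are hypothetical assumptions.

```python
# Conceptual sketch of a LAPS-style collection loop (hypothetical names throughout).
from dataclasses import dataclass, field


@dataclass
class PreferenceMemory:
    """Stores preferences extracted across sessions for a single user."""
    preferences: list[str] = field(default_factory=list)

    def add(self, new_prefs: list[str]) -> None:
        # Keep only preferences not already recorded.
        self.preferences.extend(p for p in new_prefs if p not in self.preferences)


def classify_dialogue_act(history: list[str], llm) -> str:
    """Element 1: label the dialogue state (e.g., elicit preferences, recommend)."""
    return llm(f"Classify the next assistant dialogue act given:\n{history}")


def generate_guidance(act: str, memory: PreferenceMemory, llm) -> str:
    """Element 2: produce guidance the human worker uses to write the next turn."""
    return llm(
        f"Act: {act}\nKnown preferences: {memory.preferences}\n"
        "Suggest what the assistant should say next."
    )


def extract_preferences(utterance: str, llm) -> list[str]:
    """Element 4: pull explicitly stated user preferences from a user utterance."""
    raw = llm(f"List the user preferences stated in: {utterance}")
    return [p.strip() for p in raw.splitlines() if p.strip()]


def run_session(worker_compose, llm, memory: PreferenceMemory, turns: int = 5) -> list[str]:
    """One self-dialogue session: the same worker composes both sides of the
    exchange (element 3), guided by the LLM, while preferences accumulate in memory."""
    history: list[str] = []
    for _ in range(turns):
        act = classify_dialogue_act(history, llm)
        guidance = generate_guidance(act, memory, llm)
        assistant_turn, user_turn = worker_compose(guidance, history)
        history += [f"ASSISTANT: {assistant_turn}", f"USER: {user_turn}"]
        memory.add(extract_preferences(user_turn, llm))
    return history
```

Under these assumptions, the preference memory persists across `run_session` calls for the same worker, which is how multi-session dialogues can remain consistent with earlier stated preferences and how a downstream recommender could condition on them.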