Hello Again! LLM-powered Personalized Agent for Long-term Dialogue

13 Feb 2025 | Hao Li*, Chenghao Yang*, An Zhang†, Yang Deng, Xiang Wang, Tat-Seng Chua
The paper introduces LD-Agent, a model-agnostic framework designed to enhance long-term dialogue systems. LD-Agent addresses the need for personalized and companionship-oriented interactions in chatbots by incorporating three key modules: event perception, persona extraction, and response generation. The event memory module uses long-term and short-term memory banks to maintain historical context, while the persona module dynamically extracts and updates user and agent personas. These modules integrate to guide the response generation module in producing coherent and contextually appropriate responses. The effectiveness of LD-Agent is demonstrated through extensive experiments on various benchmarks, models, and tasks, showing superior performance in long-term dialogue tasks and strong generality across different datasets and models. The framework's ability to handle cross-domain and cross-task scenarios further highlights its practical potential. However, the research also acknowledges limitations, such as the lack of real-world datasets and the need for more sophisticated module designs.
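The three-module pipeline described above can be sketched in code. This is a minimal illustrative sketch, not the paper's implementation: the class names (`EventMemory`, `PersonaStore`, `build_prompt`), the fixed short-term capacity, and the keyword-overlap retrieval are all assumptions standing in for the LLM-based summarization, persona extraction, and embedding retrieval the framework would actually use.

```python
from dataclasses import dataclass, field


@dataclass
class EventMemory:
    """Event perception: a short-term turn buffer plus a long-term bank of summaries."""
    short_term: list = field(default_factory=list)
    long_term: list = field(default_factory=list)

    def record(self, utterance: str, capacity: int = 4) -> None:
        self.short_term.append(utterance)
        if len(self.short_term) > capacity:
            # A real system would ask an LLM to summarize these turns;
            # here we simply join them as a stand-in.
            self.long_term.append(" / ".join(self.short_term[:capacity]))
            self.short_term = self.short_term[capacity:]

    def retrieve(self, query: str) -> list:
        # Naive keyword overlap stands in for embedding-based retrieval.
        words = set(query.lower().split())
        return [m for m in self.long_term if words & set(m.lower().split())]


@dataclass
class PersonaStore:
    """Dynamically extracted traits for both the user and the agent."""
    user: list = field(default_factory=list)
    agent: list = field(default_factory=list)


def build_prompt(memory: EventMemory, personas: PersonaStore, query: str) -> str:
    """Response generation: fuse retrieved events and personas into one prompt."""
    events = "; ".join(memory.retrieve(query)) or "none"
    return (
        f"Relevant events: {events}\n"
        f"User persona: {', '.join(personas.user) or 'unknown'}\n"
        f"Agent persona: {', '.join(personas.agent) or 'unknown'}\n"
        f"Recent turns: {' | '.join(memory.short_term)}\n"
        f"User: {query}\nAgent:"
    )
```

In use, each incoming turn is recorded; once the short-term buffer overflows, older turns are compacted into long-term memory, and the generation step conditions on both memory banks plus the current personas.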