Personalized LLM Response Generation with Parameterized User Memory Injection

14 Jan 2025 | Kai Zhang, Yejin Kim, Xiaozhong Liu
The paper introduces Memory-injected LLM Personalization (MiLP), a novel approach to enhancing the personalization of Large Language Models (LLMs). MiLP combines parameter-efficient fine-tuning (PEFT) with Bayesian Optimization to inject user historical content into the model and to search for an effective injection configuration, aiming to generate more tailored responses. Concretely, Low-Rank Adaptation (LoRA) modules parameterize user memory and inject it into the LLM's feed-forward layers, allowing the model to understand and use user-specific information effectively. Extensive experiments on three datasets show that MiLP outperforms existing baselines on ROUGE-L and persona-F1 scores, and a quality study together with ablation studies validates the contribution of each component.
Future work will focus on scaling to larger user groups and larger LLMs, enhancing the model's ability to understand user needs, and integrating shared information and user graphs.
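To make the memory-injection idea concrete, here is a minimal numpy sketch of a LoRA-style low-rank update added to a feed-forward layer, in the spirit of how MiLP parameterizes user memory. The dimensions, rank, scaling factor `alpha`, and the ReLU nonlinearity are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff, rank = 16, 64, 4

# Frozen pretrained feed-forward weight (stands in for one FFN layer of the LLM).
W = rng.normal(size=(d_ff, d_model))
# Trainable low-rank factors that would encode the user's memory (assumed shapes).
A = rng.normal(size=(rank, d_model)) * 0.01  # down-projection
B = np.zeros((d_ff, rank))                   # up-projection, zero-initialized as in LoRA
alpha = 8.0                                  # LoRA scaling hyperparameter (assumed value)

def ffn_with_memory(x):
    """Feed-forward pass with the low-rank user-memory delta injected."""
    base = W @ x                              # frozen pretrained path
    memory = (B @ (A @ x)) * (alpha / rank)   # LoRA delta: scaled B·A·x
    return np.maximum(base + memory, 0.0)     # ReLU nonlinearity (assumed)

x = rng.normal(size=d_model)
out = ffn_with_memory(x)
print(out.shape)  # (64,)
```

Because `B` is zero-initialized, the injected delta starts at zero and the layer initially behaves exactly like the pretrained model; training the factors (or, in MiLP, searching over injection settings with Bayesian Optimization) then shifts the output toward user-specific behavior.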