Optimization Methods for Personalizing Large Language Models through Retrieval Augmentation

July 14–18, 2024 | Alireza Salemi, Surya Kallumadi, Hamed Zamani
This paper explores the optimization of retrieval-augmented approaches for personalizing large language models (LLMs). The authors propose two optimization algorithms, one based on reinforcement learning and another on knowledge distillation, both aimed at improving the retrieval of personalized documents for LLMs. They also introduce a retriever selection model that decides which retrieval model to use for each input, addressing the need for multiple retrieval models to handle different aspects of personalization. Extensive experiments on the LaMP benchmark, which includes seven diverse personalization tasks, show statistically significant improvements on six out of seven datasets. The best-performing method achieves an average improvement of 5.5% across all LaMP datasets and an average improvement of 15.3% over non-personalized LLMs. The paper highlights the effectiveness of the proposed methods in enhancing personalized text generation and provides insights into the impact of each component in the pipeline.
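To make the pipeline concrete, the sketch below illustrates the overall flow the summary describes: a per-input selector picks one of several retrievers, the chosen retriever ranks the user's profile documents, and the top-k documents are prepended to the query before it reaches the LLM. All names, the toy scoring functions, and the rule-based selector are illustrative assumptions, not the paper's learned models (which are trained via reinforcement learning and knowledge distillation).

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercase word tokenizer used by the toy lexical retriever."""
    return re.findall(r"\w+", text.lower())

def lexical_score(query, doc, position):
    """Toy lexical retriever: shared-term count (a crude stand-in for BM25)."""
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    return sum((q & d).values())

def recency_score(query, doc, position):
    """Toy recency retriever: later profile entries score higher."""
    return position

# Multiple retrievers, each capturing a different aspect of personalization.
RETRIEVERS = {"lexical": lexical_score, "recency": recency_score}

def select_retriever(query):
    """Stand-in for the learned retriever selection model (a trivial rule here)."""
    return "lexical" if len(query.split()) > 3 else "recency"

def personalize_prompt(query, profile, k=2):
    """Rank the user's profile documents and prepend the top-k to the query."""
    score = RETRIEVERS[select_retriever(query)]
    ranked = sorted(
        enumerate(profile), key=lambda p: score(query, p[1], p[0]), reverse=True
    )
    context = "\n".join(doc for _, doc in ranked[:k])
    return f"{context}\n\n{query}"

# Hypothetical user profile; the resulting prompt would be fed to the LLM.
profile = [
    "I enjoy hiking in national parks.",
    "My favorite genre is science fiction.",
    "I recently started learning to cook Thai food.",
]
prompt = personalize_prompt("Recommend a book about science fiction worlds", profile)
```

The paper's contribution is to replace the hand-written pieces above with trained components: the retrievers are optimized end-to-end against downstream generation quality, and the selector learns which retriever suits each input.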