Democratizing Large Language Models via Personalized Parameter-Efficient Fine-tuning


2024 | Zhaoxuan Tan, Qingkai Zeng, Yijun Tian, Zheyuan Liu, Bing Yin, Meng Jiang
This paper introduces One PEFT Per User (OPPU), a novel approach for personalizing large language models (LLMs) by enabling individual users to own and fine-tune their own LLMs through personalized parameter-efficient fine-tuning (PEFT) modules. OPPU addresses key challenges in LLM personalization, including model ownership and user behavior shifts, by integrating both parametric and non-parametric user knowledge. The parametric knowledge is stored in the PEFT module, which is fine-tuned on each user's behavior history and preferences, while the non-parametric knowledge is supplied through retrieval and user profiles. OPPU outperforms existing prompt-based methods across seven diverse tasks in the LaMP benchmark, demonstrating superior performance in handling user behavior shifts, modeling users at different activity levels, maintaining robustness across various user history formats, and showing versatility with different PEFT methods. The approach ensures LLM ownership and enhances model customization, making personalized LLMs more accessible and effective for individual users. OPPU achieves state-of-the-art performance on all seven public tasks in the LaMP benchmark, highlighting its effectiveness in democratizing personalized LLMs.
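To make the "one PEFT module per user" idea concrete, the minimal sketch below shows how a per-user LoRA adapter could be trained on that user's behavior history with the HuggingFace Transformers and PEFT libraries. This is an illustrative assumption, not the paper's exact recipe: the base model name, hyperparameters, prompt construction, and the `build_training_text` / `user_history` names are placeholders, and the paper additionally combines the adapter with retrieved history and profile text at inference time.

```python
# Hypothetical sketch of OPPU's core idea: one lightweight LoRA adapter per user,
# fine-tuned on that user's behavior history, while the base LLM stays frozen and shared.
# Base model, hyperparameters, and prompt format here are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_name = "meta-llama/Llama-2-7b-hf"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base_name)
base_model = AutoModelForCausalLM.from_pretrained(base_name)

# Parametric user knowledge lives in this small adapter, owned by the user.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
user_model = get_peft_model(base_model, lora_config)

def build_training_text(history_item, retrieved_items):
    """Combine non-parametric context (retrieved history) with one target behavior."""
    context = "\n".join(retrieved_items)
    return f"{context}\n{history_item}"

# `user_history` stands in for this user's past behaviors (e.g., reviews, titles).
user_history = ["<user behavior 1>", "<user behavior 2>"]
optimizer = torch.optim.AdamW(user_model.parameters(), lr=1e-4)

user_model.train()
for item in user_history:
    text = build_training_text(item, retrieved_items=user_history[:1])
    batch = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    outputs = user_model(**batch, labels=batch["input_ids"])
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# The adapter (a few million parameters) can be saved and kept by the user,
# giving per-user ownership without duplicating the full base model.
user_model.save_pretrained("adapters/user_123")
```

In this sketch, personalization cost scales with the adapter size rather than the full model, which is what makes per-user ownership practical; at inference, the same user's adapter is loaded on top of the shared base model.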