The paper "Democratizing Large Language Models via Personalized Parameter-Efficient Fine-tuning" by Zhaoxuan Tan introduces a novel approach called One PEFT Per User (OPPU) to personalize large language models (LLMs). OPPU addresses the challenges of model ownership and behavior shift in LLM personalization by employing personalized parameter-efficient fine-tuning (PEFT) modules. These modules store user-specific behavior patterns and preferences, allowing users to own and customize their LLMs. OPPU integrates parametric user knowledge from personal PEFT parameters with non-parametric knowledge from behavior history retrieval and textual profiles. Experimental results on the LaMP benchmark demonstrate that OPPU outperforms existing prompt-based methods across seven diverse tasks, showing superior generalization in scenarios of user behavior shifts. The paper also highlights OPPU's robustness against varying user history formats and its versatility with different PEFT methods. The contributions of OPPU include a pioneering approach to PEFT-based LLM personalization, ensuring model ownership and enhancing customization.The paper "Democratizing Large Language Models via Personalized Parameter-Efficient Fine-tuning" by Zhaoxuan Tan introduces a novel approach called One PEFT Per User (OPPU) to personalize large language models (LLMs). OPPU addresses the challenges of model ownership and behavior shift in LLM personalization by employing personalized parameter-efficient fine-tuning (PEFT) modules. These modules store user-specific behavior patterns and preferences, allowing users to own and customize their LLMs. OPPU integrates parametric user knowledge from personal PEFT parameters with non-parametric knowledge from behavior history retrieval and textual profiles. Experimental results on the LaMP benchmark demonstrate that OPPU outperforms existing prompt-based methods across seven diverse tasks, showing superior generalization in scenarios of user behavior shifts. The paper also highlights OPPU's robustness against varying user history formats and its versatility with different PEFT methods. The contributions of OPPU include a pioneering approach to PEFT-based LLM personalization, ensuring model ownership and enhancing customization.