PERSONALIZED PIECES: Efficient Personalized Large Language Models through Collaborative Efforts

15 Jun 2024 | Zhaoxuan Tan, Zheyuan Liu, Meng Jiang
The paper introduces PERSONALIZED PIECES (PER-PCS), a framework that lets users safely share and assemble personalized parameter-efficient fine-tuning (PEFT) modules through collaborative effort. Each user contributes only a small fraction of their PEFT parameters, enabling efficient, fine-grained LLM personalization without excessive storage or computational demands.

The framework proceeds in four steps: selecting sharers, breaking their PEFT parameters into pieces, training a gate for each piece, and letting target users assemble a personalized PEFT module from those pieces using their own history data. A concrete sketch of this pipeline follows below.

Across six tasks in the LaMP benchmark, PER-PCS outperforms non-personalized and PEFT-retrieval baselines while using far fewer resources than OPPU: experiments show it is 38 times more storage-efficient and 7 times more compute-efficient. The framework is modular and scalable, and the study shows it remains robust under varying sharer counts, sharer-selection strategies, and parameter-sharing ratios. Because sharers expose only parameter pieces rather than their raw history data, the approach preserves sharer privacy, and its efficiency reduces the carbon footprint of PEFT-based personalized LLMs. The paper concludes that PER-PCS is a promising framework for community-driven LLM personalization, balancing privacy, efficiency, and performance.
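To make the four-step pipeline concrete, here is a minimal, illustrative sketch of the piece-sharing and assembly steps, not the authors' implementation. It assumes LoRA-style PEFT with per-layer pieces and a simple dot-product gate; all names (PieceStore, assemble_peft), shapes, and the use of a fixed sharer embedding as the gate are assumptions made for brevity, whereas the paper trains a gate for each piece.

```python
# Illustrative sketch of PER-PCS piece sharing and assembly (assumed shapes
# and names, not the paper's code). Each sharer's LoRA weights are split into
# per-layer pieces; a target user routes to pieces via gate/history similarity.
import torch

RANK, D_MODEL, N_LAYERS = 8, 768, 12  # assumed LoRA rank and model dimensions

class PieceStore:
    """Pool of shared LoRA pieces: one (A, B) pair plus a gate vector per piece."""
    def __init__(self):
        self.pieces = []  # each entry: {"A", "B", "layer", "gate"}

    def add_sharer(self, lora_A, lora_B, history_emb):
        # Break a sharer's per-layer LoRA matrices into pieces. In the paper the
        # gate is trained per piece; here it is simplified to a fixed embedding
        # of the sharer's history.
        for layer in range(N_LAYERS):
            self.pieces.append({
                "A": lora_A[layer], "B": lora_B[layer],
                "layer": layer, "gate": history_emb,
            })

def assemble_peft(store, target_emb):
    """For each layer, pick the piece whose gate best matches the target user's
    history embedding (dot-product routing) and stitch the winners into one
    personalized PEFT module."""
    assembled = {}
    for layer in range(N_LAYERS):
        candidates = [p for p in store.pieces if p["layer"] == layer]
        scores = torch.stack([p["gate"] @ target_emb for p in candidates])
        best = candidates[int(scores.argmax())]
        assembled[layer] = (best["A"], best["B"])
    return assembled

# Usage: two hypothetical sharers contribute pieces; a target user assembles
# a personalized module from its own history embedding.
store = PieceStore()
for _ in range(2):
    A = [torch.randn(RANK, D_MODEL) for _ in range(N_LAYERS)]
    B = [torch.randn(D_MODEL, RANK) for _ in range(N_LAYERS)]
    store.add_sharer(A, B, history_emb=torch.randn(D_MODEL))

personal_lora = assemble_peft(store, target_emb=torch.randn(D_MODEL))
print({layer: tuple(m.shape for m in mats) for layer, mats in personal_lora.items()})
```

The key design point this sketch captures is that the target user never trains a full PEFT module: personalization reduces to routing over already-trained pieces, which is where the storage and compute savings over per-user fine-tuning (as in OPPU) come from.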