PERSONALIZED PIECES: Efficient Personalized Large Language Models through Collaborative Efforts

15 Jun 2024 | Zhaoxuan Tan, Zheyuan Liu, Meng Jiang
The paper introduces PER-PCS (Personalized PIEces), a framework for efficient, collaborative sharing of personalized parameter-efficient fine-tuning (PEFT) parameters among users. PER-PCS addresses the limitations of individual PEFT approaches, which are costly to train and store per user and confine the benefits of personalization to single users. The framework first selects sharer users, breaks each sharer's PEFT module into pieces, and trains a gate for each piece. The gated pieces are added to a shared pool, from which a target user selects and assembles a personalized PEFT module using their own history data. This design preserves privacy and enables fine-grained user modeling without excessive storage or computation demands. Across six tasks, experimental results show that PER-PCS outperforms non-personalized and PEFT-retrieval baselines and matches OPPU's performance with significantly lower resource use. Analyses of sharer count, piece selection strategy, piece sharing ratio, and scalability in computation time and storage space highlight the framework's robustness. PER-PCS promotes safe sharing and makes LLM personalization more efficient, effective, and widely accessible through collaborative effort.
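The select-and-assemble step can be pictured concretely. Below is a minimal PyTorch sketch of the idea, assuming LoRA-style PEFT pieces and dot-product gates scored against an embedding of the target user's history. All names here (PeftPiece, assemble_personalized_peft, the tensor shapes) are illustrative assumptions for exposition, not the authors' actual implementation.

```python
# Hypothetical sketch of PER-PCS-style piece assembly, assuming LoRA pieces.
import torch

class PeftPiece:
    """One shared piece: a LoRA weight fragment plus its trained gate vector."""
    def __init__(self, lora_A: torch.Tensor, lora_B: torch.Tensor, gate: torch.Tensor):
        self.lora_A = lora_A  # (r, d_in) low-rank factor
        self.lora_B = lora_B  # (d_out, r) low-rank factor
        self.gate = gate      # (d_hist,) gate trained by the sharer

def assemble_personalized_peft(pool, user_history_emb, top_k=1):
    """For each slot in the shared pool, score every candidate piece's gate
    against the target user's history embedding and compose the top pieces
    into a LoRA weight update (one delta-W per slot)."""
    assembled = []
    for slot_pieces in pool:  # each slot holds candidate pieces from many sharers
        # Gate score: similarity between the user's history and each piece's gate.
        scores = torch.stack([p.gate @ user_history_emb for p in slot_pieces])
        weights = torch.softmax(scores, dim=0)
        best = torch.topk(weights, k=top_k).indices
        # Weighted combination of the selected pieces' low-rank updates.
        delta = sum(weights[i] * (slot_pieces[i].lora_B @ slot_pieces[i].lora_A)
                    for i in best)
        assembled.append(delta)  # delta-W to add to the frozen base weight
    return assembled

# Toy usage: 2 slots, 3 sharers per slot, history embedding of size 16.
d_in, d_out, r, d_hist = 32, 32, 4, 16
pool = [[PeftPiece(torch.randn(r, d_in), torch.randn(d_out, r), torch.randn(d_hist))
         for _ in range(3)] for _ in range(2)]
history = torch.randn(d_hist)
deltas = assemble_personalized_peft(pool, history)
print([d.shape for d in deltas])  # two (32, 32) delta-W tensors, one per slot
```

Note the design point this sketch tries to capture: the target user trains no new parameters, since personalization reduces to scoring shared gates against the user's history and composing the selected pieces, which is consistent with the paper's reported low compute and storage cost for target users.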