8 May 2024 | Chunxu Zhang, Guodong Long, Hongkuan Guo, Xiao Fang, Yang Song, Zhaojie Liu, Guorui Zhou, Zijian Zhang, Yang Liu, Bo Yang
This paper proposes a novel federated adaptation mechanism, named FedPA, to enhance foundation model-based recommendation while preserving user privacy. Each client learns a lightweight personalized adapter on its local data; the adapter collaborates with a frozen pre-trained foundation model to provide efficient, personalized recommendation services. The adapter captures local user preferences in a lightweight manner and is fused with the pre-trained model adaptively, balancing common knowledge against user personalization. Because raw interaction data never leaves the device, the federated learning framework preserves privacy.
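To make the adapter idea concrete, here is a minimal PyTorch sketch of a low-rank adapter wrapped around a frozen pre-trained layer. The class name, rank, and initialization are illustrative assumptions, not the paper's exact implementation:

```python
import torch
import torch.nn as nn

class PersonalizedAdapter(nn.Module):
    """Low-rank adapter personalizing a frozen pre-trained linear layer.

    A hedged sketch: the frozen weight carries common knowledge, while the
    trainable low-rank factors A and B (rank r << d) capture local user
    preferences. Names and dimensions are illustrative, not from the paper.
    """

    def __init__(self, pretrained: nn.Linear, rank: int = 8):
        super().__init__()
        self.pretrained = pretrained
        for p in self.pretrained.parameters():
            p.requires_grad = False  # common knowledge stays fixed on-device

        d_in, d_out = pretrained.in_features, pretrained.out_features
        self.A = nn.Parameter(torch.randn(d_in, rank) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(rank, d_out))        # up-projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen backbone output plus the lightweight personalized correction.
        return self.pretrained(x) + x @ self.A @ self.B
```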
FedPA introduces a personalized low-rank adapter that captures personalization at both the user level and the user-group level, together with an adaptive gate learning mechanism that dynamically learns weights for common knowledge and user personalization, enabling effective knowledge fusion. During federated optimization, only the parameters relevant to user-specific modeling are updated and communicated, which significantly reduces communication costs and yields faster convergence.
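A sketch of what such a gate and the communication-saving parameter filter might look like in PyTorch follows. `AdaptiveGate` and `user_specific_parameters` are hypothetical names, and the paper's gate architecture may differ:

```python
import torch
import torch.nn as nn

class AdaptiveGate(nn.Module):
    """Learns a per-example weight balancing common knowledge against
    personalization. A hypothetical sketch of the gating idea, not the
    paper's exact architecture."""

    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(dim * 2, 1), nn.Sigmoid())

    def forward(self, h_common: torch.Tensor, h_personal: torch.Tensor) -> torch.Tensor:
        # Gate value in (0, 1): how much personalization to use for this input.
        g = self.gate(torch.cat([h_common, h_personal], dim=-1))
        return g * h_personal + (1.0 - g) * h_common

def user_specific_parameters(model: nn.Module) -> dict:
    """Collect only trainable (adapter and gate) tensors for upload;
    frozen foundation-model weights never leave the client, which is
    what keeps the communication payload small."""
    return {n: p.detach().clone()
            for n, p in model.named_parameters() if p.requires_grad}
```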
Extensive experiments on four benchmark datasets demonstrate that FedPA outperforms existing baselines. To make deployment feasible on clients with limited computational capability, the large pre-trained model is distilled into a smaller one, reducing computation and storage while keeping performance stable. FedPA also incorporates Local Differential Privacy to further strengthen privacy protection in the federated recommendation setting.
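The clip-then-noise recipe below illustrates one standard way to apply Local Differential Privacy to the uploaded update before it reaches the server. `ldp_perturb` and its hyperparameters are assumptions for illustration; the paper's exact mechanism and calibration may differ:

```python
import torch

def ldp_perturb(update: dict, clip: float = 1.0, scale: float = 0.1) -> dict:
    """Perturb a client's parameter update before upload.

    A hedged sketch of the standard clip-then-add-noise recipe for local
    differential privacy: clipping bounds each tensor's sensitivity, and
    Laplace noise masks the individual contribution.
    """
    laplace = torch.distributions.Laplace(0.0, scale)
    noised = {}
    for name, tensor in update.items():
        t = tensor.detach().clone()
        norm = t.norm()
        if norm > clip:
            t = t * (clip / norm)        # bound the update's sensitivity
        t = t + laplace.sample(t.shape)  # add calibrated Laplace noise
        noised[name] = t
    return noised
```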
The main contributions of this paper are: (1) the federated adaptation paradigm for foundation model-based recommendation, named FedPA, which integrates the rich knowledge of pre-trained models while preserving user privacy; (2) a personalized low-rank adapter that learns user personalization at the user level and user-group level in a lightweight manner, together with an adaptive gate learning mechanism that dynamically learns weights for effective knowledge fusion; and (3) experiments on four benchmark datasets demonstrating FedPA's superior performance and its feasibility for deployment on clients with limited computational capabilities.