HYDRA: Model Factorization Framework for Black-Box LLM Personalization


25 Oct 2024 | Yuchen Zhuang¹, Haotian Sun¹, Yue Yu¹, Rushi Qiang¹, Qifan Wang², Chao Zhang¹, Bo Dai¹
HYDRA is a model factorization framework for black-box large language model (LLM) personalization. It addresses the challenge of aligning LLM outputs with individual user preferences without access to model parameters, capturing both user-specific behavior patterns from historical data and general knowledge shared among all users. HYDRA employs a retrieval-augmented workflow: a retriever first extracts relevant user behaviors from historical data to identify user-specific preferences. Personalized generation then relies on two trained components: (1) a personalized reranker that prioritizes useful user information among the retrieved records, and (2) a personalized adapter that aligns black-box LLM outputs with user-specific preferences, without requiring access to internal model parameters. Both the reranker and the adapter decompose into a base model with multiple personalized heads, resembling a hydra: the base model maintains knowledge shared across users, while the personal heads capture user-specific preferences. Evaluated on LaMP, a comprehensive language model personalization benchmark spanning text classification and generation tasks, HYDRA outperforms existing state-of-the-art prompt-based methods by an average relative improvement of 9.01% across five diverse personalization tasks. Its effectiveness stems from integrating shared knowledge with user-specific preferences, which enhances generalization across the entire user group. The implementation is released for transparency and reproducibility.
The main contributions of HYDRA are threefold: (1) a black-box LLM personalization framework that effectively mines user behavior history and adapts to user preferences for an enhanced user experience; (2) a model factorization scheme that integrates shared (global) knowledge from the base model with individual (local) preferences from multiple user-specific heads to deliver generalizable personalization; and (3) significant improvements over existing prompt-based methods across five diverse tasks in the LaMP benchmark.
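The base-plus-heads factorization can be sketched in plain Python. This is only an illustrative toy under stated assumptions: the class and function names are hypothetical, and the paper's actual reranker and adapter are learned neural models, not hand-written scoring functions.

```python
# Toy sketch of HYDRA-style model factorization (hypothetical names; the
# real base model and heads are trained neural networks).

class HydraModel:
    """A shared base plus one lightweight head per user, like a hydra."""

    def __init__(self, base_fn):
        self.base_fn = base_fn  # shared (global) knowledge across all users
        self.heads = {}         # user_id -> per-user (local) preference head

    def add_user(self, user_id, head_fn):
        self.heads[user_id] = head_fn

    def score(self, user_id, item):
        shared = self.base_fn(item)                   # global representation
        head = self.heads.get(user_id, lambda s: s)   # unseen users fall back
        return head(shared)                           # to the shared base

# Toy usage: the base scores by text length; one user's head halves it.
model = HydraModel(base_fn=lambda text: len(text))
model.add_user("u1", lambda s: s * 0.5)
print(model.score("u1", "abcd"))      # personalized score: 2.0
print(model.score("unknown", "abcd")) # base-only score: 4
```

The key design point this mirrors is that only the small per-user heads differ between users, while the expensive shared component is trained once over the whole user group.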
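The retrieval-augmented workflow (retrieve user behaviors, rerank them per user, assemble a prompt for the black-box LLM) might look roughly like the sketch below. Every helper here is a hypothetical stand-in: a crude lexical-overlap retriever replaces the paper's retriever, and a simple per-user scoring function replaces the trained personalized reranker.

```python
# Hedged sketch of the retrieve -> rerank -> prompt pipeline (all helper
# names are hypothetical stand-ins for HYDRA's learned components).

def retrieve(history, query, k=4):
    """Toy lexical retriever: rank past records by word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(history,
                    key=lambda r: len(q & set(r.lower().split())),
                    reverse=True)
    return scored[:k]

def rerank(records, user_weight):
    """Stand-in for the personalized reranker: a per-user scoring head."""
    return sorted(records, key=user_weight, reverse=True)

def build_prompt(query, records, n=2):
    """Prepend the top-n reranked behaviors to the query for the black-box LLM."""
    context = "\n".join(records[:n])
    return f"User history:\n{context}\n\nTask: {query}"

history = ["rated sci-fi movie 5 stars",
           "rated romance movie 2 stars",
           "rated sci-fi book 4 stars"]
candidates = retrieve(history, "recommend a sci-fi movie")
ranked = rerank(candidates, user_weight=lambda r: r.count("sci-fi"))
prompt = build_prompt("recommend a sci-fi movie", ranked)
```

Because the LLM is black-box, personalization enters only through the prompt: the reranker decides which retrieved behaviors reach the context, and (in the full framework) the adapter further aligns the model's raw outputs with the user's preferences.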