25 Oct 2024 | Yuchen Zhuang, Haotian Sun, Yue Yu, Rushi Qiang, Qifan Wang, Chao Zhang, Bo Dai
**Abstract:**
Personalization has become a critical area in modern intelligent systems, focusing on mining user behavior data and adapting to individual preferences. Despite the advanced few-shot capabilities of black-box large language models (LLMs), their opaque model parameters pose significant challenges in aligning generated outputs with user expectations. Existing solutions primarily rely on prompt design to incorporate user profiles and behaviors, but struggle to generalize effectively due to their inability to capture shared knowledge among users. To address these challenges, we propose HYDRA, a model factorization framework that captures both user-specific behavior patterns from historical data and shared general knowledge among all users to deliver personalized generation. HYDRA uses a retrieval-augmented workflow, where a retriever extracts relevant user behaviors, and a reranker prioritizes useful information. An adapter aligns the output with individual user preferences, eliminating the need to access internal model parameters. Both the reranker and adapter are decomposed into a base model with multiple user-specific heads, resembling a hydra. The base model maintains shared knowledge, while the heads capture user-specific preferences. Experimental results on the LaMP benchmark show that HYDRA outperforms existing state-of-the-art prompt-based methods by an average relative improvement of 9.01% across five diverse personalization tasks.
**Introduction:**
The paper introduces HYDRA, a model factorization framework for black-box LLM personalization. It addresses the challenge of aligning LLM outputs with individual user preferences without direct access to model parameters. HYDRA chains a retriever, which recalls relevant records from a user's history, a reranker, which prioritizes the most useful of them, and an adapter, which aligns LLM outputs with user preferences. The framework is evaluated on the LaMP benchmark, demonstrating significant improvements over existing methods across diverse personalization tasks.
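The retrieve-rerank-generate-adapt workflow can be pictured as a simple pipeline. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation: all names (`personalize`, `retrieve`, `rerank`, `adapt`, `llm`) are hypothetical stand-ins, and sampling three candidate generations for the adapter to choose among is an assumption for exposition.

```python
# Minimal sketch of a retrieve-rerank-adapt workflow around a black-box LLM.
# All stage functions are hypothetical stubs supplied by the caller.
from typing import Callable

def personalize(query: str,
                history: list[str],
                retrieve: Callable[[str, list[str]], list[str]],
                rerank: Callable[[str, list[str]], list[str]],
                adapt: Callable[[str, list[str]], str],
                llm: Callable[[str], str],
                k: int = 4,
                n_candidates: int = 3) -> str:
    """Retrieve relevant user records, rerank them, prompt the black-box
    LLM with the top-k records, then let the adapter pick the candidate
    output best aligned with this user's preferences."""
    candidates = retrieve(query, history)        # coarse recall from user history
    top_k = rerank(query, candidates)[:k]        # user-aware prioritization
    prompt = "\n".join(top_k) + "\n" + query     # history-augmented prompt
    outputs = [llm(prompt) for _ in range(n_candidates)]  # sample generations
    return adapt(query, outputs)                 # preference-aligned selection
```

Note that no stage touches the LLM's parameters: the retriever and reranker shape the prompt, and the adapter operates purely on sampled outputs, which is what makes the approach viable for black-box models.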
**Related Works:**
The paper reviews existing approaches to LLM personalization, including in-context learning, profile-augmented prompting, and retrieval-augmented prompting. It highlights the limitations of these methods, such as the inability to capture shared knowledge and the need for explicit user preferences.
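For a concrete point of comparison, a typical prompt-based baseline simply injects a textual user profile and a few retrieved history items into the prompt, with no learned user-specific component. The snippet below is an invented illustration of that pattern; the field layout and wording are assumptions, not the formats used by the baselines in the paper.

```python
# Illustrative profile-augmented prompt construction (invented format).
def build_prompt(profile: str, history: list[str], query: str) -> str:
    history_block = "\n".join(f"- {h}" for h in history)
    return (
        f"User profile: {profile}\n"
        f"Past behavior:\n{history_block}\n"
        f"Task: {query}\n"
        "Respond in this user's style."
    )
```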
**Model Factorization for Personalization:**
HYDRA's model factorization approach decomposes the personalized model into a base model with shared knowledge and multiple user-specific heads. This allows for effective integration of global knowledge and individual preferences.
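A minimal sketch of this factorization follows, assuming a simple feed-forward base and one linear scoring head per user; the module sizes and shapes are illustrative assumptions, and in the paper both the reranker and the adapter are factorized this way.

```python
import torch
import torch.nn as nn

class HydraModel(nn.Module):
    """Shared base + per-user heads, mirroring the base/heads factorization.
    The base encoder and head shapes here are illustrative assumptions."""
    def __init__(self, num_users: int, hidden: int = 768):
        super().__init__()
        # Base model: shared general knowledge across all users.
        self.base = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        # One lightweight head per user: user-specific preferences.
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, 1) for _ in range(num_users)]
        )

    def forward(self, x: torch.Tensor, user_id: int) -> torch.Tensor:
        shared = self.base(x)               # shared representation
        return self.heads[user_id](shared)  # user-specific relevance score
```

In this setup the base is trained on data pooled across all users while each head sees only its own user's behavior; at inference, the user-specific score can rank retrieved history items (reranker) or candidate LLM outputs (adapter).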
**Experiments:**
The paper presents experimental results on the LaMP benchmark, showing that HYDRA outperforms existing methods by an average relative improvement of 9.01%. Ablation studies and scale-up experiments further validate the effectiveness of HYDRA.
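For reference, "average relative improvement" here is the per-task relative gain averaged over the five LaMP tasks. The quick illustration below uses placeholder scores, not the paper's reported numbers.

```python
# Relative improvement per task: (hydra - baseline) / baseline.
# Scores below are placeholders, NOT results from the paper.
baseline = [0.60, 0.45, 0.70, 0.52, 0.38]
hydra    = [0.66, 0.49, 0.75, 0.57, 0.42]
rel = [(h - b) / b for h, b in zip(hydra, baseline)]
avg = sum(rel) / len(rel)
print(f"average relative improvement: {avg:.2%}")
```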
**Conclusions:**
HYDRA introduces a novel learning-based paradigm for black-box LLM personalization, enhancing user experience and accessibility. It addresses the challenge of aligning LLM outputs with individual preferences without requiring access to model parameters, making it a promising solution for human-centric intelligent systems.