2024 | Zhiyu Hu, Yang Zhang, Minghao Xiao, Wenjie Wang, Fuli Feng, Xiangnan He
This paper introduces the Adapter Partition and Aggregation (APA) framework for exact and efficient unlearning in Large Language Model-based Recommendation (LLMRec). LLMRec customizes large language models (LLMs) using parameter-efficient fine-tuning (PEFT) with recommendation data. However, incorporating user data into LLMs raises privacy concerns, necessitating unlearning to remove unusable data (e.g., historical behaviors) from established models. Existing unlearning methods are insufficient due to high computational costs or incomplete data erasure.
The APA framework addresses these challenges by partitioning training data into shards and training individual adapters for each shard. Only the adapters affected by unusable data are retrained, reducing computational costs. During inference, parameter-level adapter aggregation with sample-adaptive attention is employed to maintain recommendation performance and reduce inference costs. This approach ensures exact unlearning while preserving efficiency.
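The partition-retrain-aggregate loop described above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the "adapter" here is just a mean vector, the partitioning is round-robin rather than semantic-aware, and all function names (`train_adapter`, `unlearn`, `aggregate`) are assumptions for the sketch.

```python
import math

def train_adapter(shard):
    """Hypothetical stand-in for PEFT adapter training on one shard:
    the 'adapter' is simply the mean of the shard's feature vectors."""
    dim = len(shard[0])
    return [sum(x[i] for x in shard) / len(shard) for i in range(dim)]

def partition(data, num_shards):
    """Split training data into disjoint shards (round-robin here;
    APA uses a semantic-aware partition)."""
    shards = [[] for _ in range(num_shards)]
    for idx, sample in enumerate(data):
        shards[idx % num_shards].append(sample)
    return shards

def unlearn(shards, adapters, shard_id, sample):
    """Exact unlearning: delete the sample, then retrain ONLY the
    affected shard's adapter; all other adapters are untouched."""
    shards[shard_id] = [x for x in shards[shard_id] if x != sample]
    adapters[shard_id] = train_adapter(shards[shard_id])

def aggregate(adapters, query):
    """Parameter-level aggregation with sample-adaptive attention:
    weight each adapter by softmax similarity to the query sample,
    yielding a single merged parameter set for inference."""
    scores = [sum(q * a for q, a in zip(query, ad)) for ad in adapters]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(adapters[0])
    return [sum(w * ad[i] for w, ad in zip(weights, adapters))
            for i in range(dim)]

if __name__ == "__main__":
    data = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]]
    shards = partition(data, 2)
    adapters = [train_adapter(s) for s in shards]
    # A deletion request for one interaction touches a single shard:
    unlearn(shards, adapters, 0, [1.0, 1.0])
    merged = aggregate(adapters, query=[1.0, 0.0])
```

Because retraining is confined to one shard, the result is identical to retraining from scratch on the retained data, which is what makes the unlearning exact rather than approximate.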
The APA framework is evaluated on two real-world datasets, Book and Movie, demonstrating its effectiveness in maintaining recommendation performance and unlearning efficiency. The results show that APA outperforms existing methods in both recommendation accuracy and unlearning speed, and that smaller shard sizes can improve unlearning efficiency without compromising performance.
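The shard-size effect follows from simple arithmetic: with N training samples split evenly across S shards, a single deletion request triggers retraining of roughly N/S samples instead of all N. A minimal sketch (the numbers are illustrative, not taken from the paper's experiments):

```python
def retrain_fraction(num_samples, num_shards):
    """Fraction of the training set retrained after one deletion,
    assuming evenly sized shards and one adapter retrained per request."""
    per_shard = num_samples / num_shards
    return per_shard / num_samples

# e.g. 100,000 interactions in 8 shards: only 12.5% of the data
# is touched per unlearning request, versus 100% for full retraining.
print(retrain_fraction(100_000, 8))
```

More shards mean cheaper unlearning, but also more adapters to aggregate at inference, which is the trade-off the sample-adaptive aggregation is designed to manage.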
The APA framework is designed for LLMRec, leveraging the strengths of parameter-efficient fine-tuning and adaptive aggregation. It addresses the unique challenges of LLMRec, including high computational costs and the need for complete data removal. The framework is flexible and can be extended to other PEFT methods, enhancing its applicability across diverse LLMRec architectures. The study highlights the importance of semantic-aware data partitioning and adaptive aggregation in achieving efficient and effective unlearning in LLMRec.