Item-side Fairness of Large Language Model-based Recommendation System

May 13–17, 2024, Singapore, Singapore | Meng Jiang, Keqin Bao, Jizhi Zhang, Wenjie Wang, Zhengyi Yang, Fuli Feng, Xiangnan He
This paper investigates the item-side fairness of Large Language Model-based Recommender Systems (LRS) and the unique challenges posed by LLMs, which can introduce societal biases. The study argues that item-side fairness in LRS requires dedicated investigation because LRS differ in key ways from conventional recommendation systems. To bridge this gap, the authors develop IFairLRS, a framework that enhances item-side fairness by calibrating recommendations in both the in-learning and post-learning stages: training samples are reweighted to reduce bias during learning, and recommendation lists are reranked afterward to correct residual unfairness. Extensive experiments on the MovieLens and Steam datasets demonstrate significant improvements in item-side fairness without compromising recommendation accuracy. The findings underscore the importance of adapting conventional fairness methods to LRS so that vulnerable populations retain equitable information access and exposure opportunities.
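To make the post-learning (reranking) idea concrete, here is a minimal illustrative sketch, not the paper's actual IFairLRS algorithm: candidates carry a relevance score and an item-group label (e.g. popular vs. niche genres), and the reranker greedily fills the top-k list from whichever group is most under-exposed relative to a target exposure share, taking the best-scored remaining item from that group. The function name `fair_rerank` and the group/share representation are assumptions for this example.

```python
# Hedged sketch of exposure-calibrated reranking (illustrative only; not the
# IFairLRS implementation). Each candidate is (item_id, score, group).
def fair_rerank(candidates, target_share, k):
    """Return k item_ids, balancing group exposure toward target_share.

    target_share maps group -> desired fraction of the top-k positions.
    """
    # Bucket candidates by group, best-scored first within each bucket.
    pools = {}
    for item, score, group in candidates:
        pools.setdefault(group, []).append((score, item))
    for pool in pools.values():
        pool.sort(reverse=True)

    exposure = {g: 0 for g in pools}
    ranked = []
    while len(ranked) < k and any(pools.values()):
        # Deficit = how far a group lags its target share of positions so far.
        def deficit(g):
            return target_share.get(g, 0.0) * (len(ranked) + 1) - exposure[g]

        # Pick the most under-exposed group that still has candidates,
        # then take its best remaining item.
        group = max((g for g in pools if pools[g]), key=deficit)
        _, item = pools[group].pop(0)
        ranked.append(item)
        exposure[group] += 1
    return ranked
```

With equal target shares, the reranker interleaves groups instead of letting the majority group's higher scores monopolize the list, which is the basic trade-off such post-hoc calibration makes between accuracy and item-side exposure.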