May 13-17, 2024 | Meng Jiang, Keqin Bao, Jizhi Zhang, Wenjie Wang, Zhengyi Yang, Fuli Feng, Xiangnan He
This paper investigates the item-side fairness of Large Language Model-based Recommendation Systems (LRS) and compares it with that of traditional recommendation systems. The study reveals that LRS is strongly influenced by item popularity and by the inherent semantic biases of Large Language Models (LLMs), which can lead to unfair treatment of different item groups: popular items are over-recommended while less popular ones are under-recommended. At the genre level, LRS likewise over-recommends items from high-popularity genres and under-recommends items from low-popularity genres, an effect the authors attribute to the semantic biases of LLMs.
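The group-level disparity described above can be made concrete with a simple exposure comparison. The sketch below is illustrative only; the function name, group labels, and metric are assumptions rather than the paper's exact fairness measure. It compares each item group's share of recommendations against its share of historical interactions, so a positive gap flags an over-recommended group and a negative gap an under-recommended one.

```python
from collections import Counter

def group_exposure_gap(recommendations, interactions, item_to_group):
    """Compare each group's share of recommendations to its share of
    historical interactions. Positive gap = over-recommended group,
    negative gap = under-recommended group.
    (Illustrative metric, not the paper's exact formulation.)
    """
    rec_counts = Counter(item_to_group[i] for i in recommendations)
    int_counts = Counter(item_to_group[i] for i in interactions)
    rec_total = sum(rec_counts.values())
    int_total = sum(int_counts.values())
    gaps = {}
    for group in set(rec_counts) | set(int_counts):
        rec_share = rec_counts.get(group, 0) / rec_total
        int_share = int_counts.get(group, 0) / int_total
        gaps[group] = rec_share - int_share
    return gaps

# Toy example: the "popular" genre takes a larger share of recommendations
# than of interactions, so its gap is positive (over-recommended).
item_to_group = {"a": "popular", "b": "popular", "c": "niche", "d": "niche"}
recs = ["a", "b", "a", "b", "c"]
hist = ["a", "b", "c", "d", "c", "d"]
print(group_exposure_gap(recs, hist, item_to_group))
```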
To address these issues, the authors propose IFairLRS, a framework that improves item-side fairness through reweighting and reranking strategies. The reweighting strategy adjusts the weights of training samples to reduce the impact of biased training data, while the reranking strategy adds a penalty term that adjusts recommendations according to fairness metrics. Evaluated on two real-world datasets, MovieLens1M and Steam, the framework yields significant improvements in item-side fairness without compromising recommendation accuracy.
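A minimal sketch of the two strategies, under stated assumptions, is given below: the function names, the inverse-frequency weighting, and the exposure-based penalty are illustrative choices, not the authors' exact IFairLRS formulation.

```python
import math
from collections import Counter

def reweight_training_samples(train_items, item_to_group):
    """Assign each training sample a weight inversely proportional to its
    item group's frequency, so that under-represented groups are not
    drowned out during fine-tuning. (Illustrative weighting scheme.)
    """
    group_counts = Counter(item_to_group[i] for i in train_items)
    n_groups = len(group_counts)
    total = len(train_items)
    # With these weights, each group contributes roughly equally overall.
    return [total / (n_groups * group_counts[item_to_group[i]])
            for i in train_items]

def fairness_rerank(candidates, item_to_group, current_exposure, top_k=10, alpha=0.5):
    """Greedily rerank (item, score) candidates, subtracting a penalty that
    grows with how much exposure an item's group has already received.
    `alpha` trades recommendation accuracy against item-side fairness.
    """
    exposure = Counter(current_exposure)
    pool = dict(candidates)  # item -> relevance score
    selected = []
    for _ in range(min(top_k, len(pool))):
        best_item, best_val = None, -math.inf
        for item, score in pool.items():
            penalty = alpha * exposure[item_to_group[item]]
            if score - penalty > best_val:
                best_item, best_val = item, score - penalty
        selected.append(best_item)
        exposure[item_to_group[best_item]] += 1
        del pool[best_item]
    return selected

# Toy usage: two "popular" items outscore a "niche" one, but the exposure
# penalty lets the niche item break into the top-2 list.
cands = [("a", 0.9), ("b", 0.85), ("c", 0.6)]
groups = {"a": "popular", "b": "popular", "c": "niche"}
print(fairness_rerank(cands, groups, current_exposure={"popular": 3}, top_k=2, alpha=0.2))
```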
The study concludes that improving item-side fairness in LRS is crucial for ensuring equitable information access and exposure opportunities for vulnerable populations. The proposed framework, IFairLRS, provides a practical solution to enhance fairness in LRS while maintaining the effectiveness of recommendations. Future work will focus on designing more effective fairness-oriented methods tailored for LRS and exploring fairness at the individual and long-term levels.