Enhancing Recommendation Diversity by Re-ranking with Large Language Models

17 Jun 2024 | DIEGO CARRARO, DEREK BRIDGE
This paper explores how Large Language Models (LLMs) can enhance the diversity of recommendations produced by Recommender Systems (RSs) through re-ranking. The study investigates whether LLMs can interpret and perform re-ranking tasks, particularly with respect to item diversity. The authors propose a methodology in which an LLM is prompted, using various prompt templates, to produce a diverse ranking from a candidate list. They conduct experiments on two datasets (anime and books) using state-of-the-art LLMs such as ChatGPT and Llama 2. The results show that LLM-based re-rankers outperform random re-ranking in terms of diversity, but remain inferior to traditional re-rankers such as MMR, xQuAD, and RxQuAD. The study also highlights the trade-offs between performance, cost, and other factors such as data control and domain generalization. Since LLMs continue to improve on natural language processing and recommendation tasks while inference costs fall, the authors conclude that LLM-based re-ranking has significant potential and offers a promising alternative that could become competitive with traditional methods as the technology matures.
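As a concrete illustration of the prompting approach, the sketch below serialises a candidate list into a diversity-seeking prompt. The template wording and the `llm_complete` helper are assumptions made for illustration only; the paper evaluates several prompt templates whose exact wording is not reproduced here.

```python
# Minimal sketch of prompt-based diversity re-ranking (illustrative, not the
# paper's actual templates). The candidate list is serialised into a prompt
# that asks the model for a diversity-aware ordering.
def build_diversity_prompt(candidates: list[str], k: int = 10) -> str:
    items = "\n".join(f"{i + 1}. {title}" for i, title in enumerate(candidates))
    return (
        "You are a recommender system. From the candidate items below, "
        f"select and rank the top {k} so that the list is as diverse as "
        "possible while remaining relevant to the user.\n"
        f"Candidates:\n{items}\n"
        "Answer with the chosen item numbers, one per line, in ranked order."
    )

# Usage (llm_complete is a hypothetical wrapper around an LLM API such as
# ChatGPT or Llama 2):
# response = llm_complete(build_diversity_prompt(candidate_titles))
```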
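For comparison, MMR (Maximal Marginal Relevance) is the classic greedy diversity re-ranker among the traditional baselines the paper cites: at each step it selects the item maximising a weighted trade-off between relevance and dissimilarity to the items already chosen, i.e. score(i) = lam * rel(i) - (1 - lam) * max over selected j of sim(i, j). The following is a minimal sketch under assumed inputs (per-item relevance scores and a pairwise similarity function), not the paper's exact implementation.

```python
# Minimal sketch of MMR re-ranking. Inputs (relevance scores, similarity
# function) are assumptions for illustration.
from typing import Callable, Sequence

def mmr_rerank(
    candidates: Sequence[str],
    relevance: dict[str, float],
    similarity: Callable[[str, str], float],
    k: int = 10,
    lam: float = 0.5,
) -> list[str]:
    """Greedily select k items, trading off relevance against similarity
    to the items already selected (higher lam favours relevance)."""
    selected: list[str] = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        def mmr_score(item: str) -> float:
            # Penalise items similar to anything already in the ranking.
            max_sim = max((similarity(item, s) for s in selected), default=0.0)
            return lam * relevance[item] - (1 - lam) * max_sim
        best = max(remaining, key=mmr_score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

With lam = 1.0 this reduces to ranking purely by relevance; lowering lam trades accuracy for diversity, which is exactly the trade-off the paper measures for its LLM-based re-rankers.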