Large Language Models for Next Point-of-Interest Recommendation

July 14-18, 2024 | Peibo Li, Maarten de Rijke, Hao Xue, Shuang Ao, Yang Song, and Flora D. Salim
This paper proposes LLM4POI, a framework that applies pretrained large language models (LLMs) to next point-of-interest (POI) recommendation. The framework addresses the challenge of effectively using the contextual information in location-based social network (LBSN) data, which previous methods often discard. LLMs can preserve heterogeneous LBSN data in its original format, avoiding the loss of contextual information, and can interpret the inherent meaning of that context thanks to the commonsense knowledge they encode.

The framework is evaluated on three real-world LBSN datasets and outperforms state-of-the-art models on all three. Analysis shows that it uses contextual information effectively and alleviates the cold-start and short-trajectory problems. The source code is available at: https://github.com/neolifer/LLM4POI.

The paper also reviews related work, including sequence-based models, graph-based models, and LLMs for time-series data and recommender systems. The methodology comprises trajectory prompting, key-query similarity, and supervised fine-tuning of LLMs. Experiments show that the framework performs well across different trajectory lengths and generalizes to unseen data. The paper concludes that LLMs hold promise for next-POI recommendation; future work includes addressing the limitations of LLMs and exploring chain-of-thought reasoning for the task.
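The trajectory-prompting step can be sketched as follows. This is a minimal illustration of serializing raw check-ins into a natural-language prompt; the record fields and prompt wording are assumptions for illustration, not the paper's exact template:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CheckIn:
    # Illustrative fields; real LBSN records carry more context.
    poi_id: int
    category: str
    timestamp: str  # e.g. "2012-04-03 08:05"

def trajectory_to_prompt(user_id: int, checkins: List[CheckIn]) -> str:
    """Serialize a check-in sequence into a prompt, keeping heterogeneous
    context (category, time) in its original textual form."""
    lines = [f"The following is a trajectory of user {user_id}:"]
    for c in checkins:
        lines.append(
            f"At {c.timestamp}, user {user_id} visited POI id {c.poi_id}, "
            f"which is a {c.category}."
        )
    lines.append(
        f"Given the data, which POI id will user {user_id} visit next?"
    )
    return "\n".join(lines)

prompt = trajectory_to_prompt(
    42,
    [CheckIn(101, "Coffee Shop", "2012-04-03 08:05"),
     CheckIn(205, "Office", "2012-04-03 09:00")],
)
print(prompt)
```

Prompts in this form can then be paired with the ground-truth next POI for supervised fine-tuning.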