This paper proposes LLM-TRSR, a novel framework for text-rich sequential recommendation with Large Language Models (LLMs). The central challenge in this setting is that capturing a user's behavior history requires very long text, which strains LLMs through input-length limits, computational overhead, and degraded performance. To address this, the authors first segment the user's behavior history into blocks and then apply an LLM-based summarizer to distill a summary of user preferences. Two summarization paradigms are introduced: hierarchical summarization, which summarizes each block independently and then recursively summarizes those summaries into a higher-level summary, and recurrent summarization, which iteratively updates a running summary as each new block is processed.

The resulting preference summary, together with the user's most recent interactions and the candidate item's description, is then used to train an LLM-based recommender via Supervised Fine-Tuning (SFT), with Low-Rank Adaptation (LoRA) for parameter-efficient fine-tuning. The framework is evaluated on two public datasets, Amazon-M2 for product recommendation and MIND for news recommendation, where it outperforms existing approaches in recommendation accuracy. The paper also analyzes the impact of the number of historical items and of model parameter size, underscoring the effectiveness of the proposed summarization techniques in text-rich sequential recommendation scenarios.
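To make the contrast between the two summarization paradigms concrete, the sketch below shows them side by side in Python. This is a minimal illustration rather than the authors' implementation: the function names, the toy behavior blocks, and the truncating `llm_summarize` stand-in are assumptions; in LLM-TRSR the summarizer is an LLM prompted to produce a natural-language user preference summary.

```python
from typing import Callable, List

# Hypothetical stand-in for the paper's LLM-based summarizer: in LLM-TRSR this
# would be an LLM prompted to summarize user preferences; here we truncate the
# input so the sketch stays runnable without a model.
def llm_summarize(text: str, max_chars: int = 200) -> str:
    return text[:max_chars]

def hierarchical_summarize(blocks: List[str],
                           summarize: Callable[[str], str]) -> str:
    """Summarize each block independently, then summarize the concatenation
    of the block-level summaries into one higher-level preference summary."""
    block_summaries = [summarize(block) for block in blocks]
    return summarize("\n".join(block_summaries))

def recurrent_summarize(blocks: List[str],
                        summarize: Callable[[str], str]) -> str:
    """Maintain a running summary that is updated as each new block arrives."""
    summary = ""
    for block in blocks:
        # Each update sees the current summary plus the newest behavior block.
        summary = summarize(summary + "\n" + block)
    return summary

if __name__ == "__main__":
    # Toy user history already segmented into behavior blocks.
    blocks = [
        "clicked: wireless mouse; viewed: mechanical keyboard",
        "purchased: USB-C hub; clicked: laptop stand",
        "viewed: noise-cancelling headphones; clicked: webcam",
    ]
    print(hierarchical_summarize(blocks, llm_summarize))
    print(recurrent_summarize(blocks, llm_summarize))
```

In either paradigm, the final summary (not the full history) is what gets packed into the recommender's prompt alongside recent interactions and candidate items, which is how the framework sidesteps the input-length constraint.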