This paper addresses the challenge of using Large Language Models (LLMs) for text-rich sequential recommendation, i.e., scenarios where items carry extensive textual information. The authors propose LLM-TRSR (Large Language Models for Text-Rich Sequential Recommendation), a framework designed to overcome the limitations of LLMs in handling long inputs and their associated computational overhead. The framework segments a user's historical behaviors into manageable blocks, applies an LLM-based summarizer to produce user preference summaries, and then fine-tunes an LLM-based recommender using Supervised Fine-Tuning (SFT) with Parameter-Efficient Fine-Tuning (PEFT) techniques. Two summarization strategies, hierarchical summarization and recurrent summarization, are introduced to capture user preferences effectively. Experiments on two public datasets, Amazon-M2 and MIND, demonstrate superior performance over baseline methods. The paper also analyzes the impact of the number of historical items and of LLM parameter size on model performance, offering insights into suitable configurations for different scenarios.
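To make the summarization stage concrete, the following is a minimal Python sketch of the two strategies as described above. It assumes a generic `llm` callable that maps a prompt string to a response string; the function names, prompt wording, block size, and pairwise merging scheme are illustrative assumptions, not the paper's exact implementation.

```python
from typing import Callable, List


def split_into_blocks(item_texts: List[str], block_size: int) -> List[List[str]]:
    """Segment a user's text-rich interaction history into manageable blocks."""
    return [item_texts[i:i + block_size] for i in range(0, len(item_texts), block_size)]


def recurrent_summarize(item_texts: List[str], llm: Callable[[str], str],
                        block_size: int = 5) -> str:
    """Recurrent summarization: fold each new block into a running preference summary."""
    summary = ""
    for block in split_into_blocks(item_texts, block_size):
        prompt = (
            "Current user preference summary:\n" + (summary or "(none yet)") + "\n\n"
            "Newly observed items:\n" + "\n".join(block) + "\n\n"
            "Update the user preference summary to reflect both."
        )
        summary = llm(prompt)
    return summary


def hierarchical_summarize(item_texts: List[str], llm: Callable[[str], str],
                           block_size: int = 5) -> str:
    """Hierarchical summarization: summarize each block, then merge the summaries."""
    block_summaries = [
        llm("Summarize the user preferences reflected in these items:\n" + "\n".join(block))
        for block in split_into_blocks(item_texts, block_size)
    ]
    if not block_summaries:
        return ""
    # Merge summaries pairwise until a single user preference summary remains.
    while len(block_summaries) > 1:
        merged = []
        for i in range(0, len(block_summaries), 2):
            pair = block_summaries[i:i + 2]
            merged.append(llm("Combine these preference summaries into one:\n\n" + "\n\n".join(pair)))
        block_summaries = merged
    return block_summaries[0]


if __name__ == "__main__":
    # Dummy LLM callable so the sketch runs end to end; replace with a real model call.
    dummy_llm = lambda prompt: "summary(" + str(len(prompt)) + " chars of context)"
    history = [f"Item {i}: a text-rich description..." for i in range(12)]
    print(recurrent_summarize(history, dummy_llm))
    print(hierarchical_summarize(history, dummy_llm))
```

The resulting summary, rather than the full text-rich history, would then be placed in the prompt of the downstream recommender that is fine-tuned with SFT and PEFT, which is what keeps the recommender's input within a manageable context length.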