This paper investigates the performance of the Selective State Space Model (Mamba) in lifelong sequential recommendation, focusing on sequences of length 2k. The authors introduce RecMamba, a framework that integrates the Mamba block to model user preferences over time. Extensive experiments on two real-world datasets, KuaiRand and LFM-1b, demonstrate that RecMamba achieves superior performance compared to representative models such as SASRec, while reducing training time by approximately 70% and memory costs by about 80%. The study highlights the benefit of longer sequences for recommendation performance and the efficiency of RecMamba in handling large-scale user interaction data. Future work includes refining Mamba for multi-behavior recommendation and improving the handling of side information.
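To make the described architecture concrete, below is a minimal sketch of a Mamba-based sequential recommender in the spirit of RecMamba: item IDs are embedded, passed through a Mamba block, and the final hidden state scores candidate items. This is an illustrative assumption, not the paper's exact design; the class name `RecMambaSketch`, the hyperparameters (`d_model`, `d_state`, etc.), and the choice to tie output scores to the item embedding matrix are all hypothetical. It uses the `Mamba` block from the open-source `mamba_ssm` package, which requires a CUDA GPU.

```python
import torch
import torch.nn as nn
from mamba_ssm import Mamba  # pip install mamba-ssm; requires CUDA


class RecMambaSketch(nn.Module):
    """Illustrative Mamba-based sequential recommender (not the paper's exact model)."""

    def __init__(self, num_items: int, d_model: int = 64):
        super().__init__()
        # Item id 0 is reserved for padding.
        self.item_emb = nn.Embedding(num_items + 1, d_model, padding_idx=0)
        # Hyperparameters below are mamba_ssm defaults, assumed for illustration.
        self.mamba = Mamba(d_model=d_model, d_state=16, d_conv=4, expand=2)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, item_seq: torch.Tensor) -> torch.Tensor:
        # item_seq: (batch, seq_len) of item ids, e.g. up to 2k interactions.
        h = self.mamba(self.item_emb(item_seq))   # (batch, seq_len, d_model)
        user_state = self.norm(h[:, -1, :])       # last position = user preference state
        # Score every candidate item by dot product with its embedding (weight tying).
        return user_state @ self.item_emb.weight.T  # (batch, num_items + 1)


# Usage: score the next item for a batch of (padded) interaction sequences.
model = RecMambaSketch(num_items=10_000).cuda()
seqs = torch.randint(1, 10_001, (8, 2048), device="cuda")
scores = model(seqs)  # (8, 10001) logits over candidate items
```

Because the Mamba block scans the sequence in linear time and constant memory per step, this layout scales to lifelong-length histories far more cheaply than the quadratic self-attention in SASRec, which is consistent with the reported training-time and memory savings.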