This paper introduces R2I, a model-based reinforcement learning (MBRL) method that integrates state space models (SSMs) into the world model of DreamerV3 to improve long-term memory and long-horizon credit assignment. R2I uses a variant of S4, an SSM that excels at capturing long-range dependencies, so the agent can recall distant past experiences and use them to make informed decisions. The world model consists of a representation model, a dynamics model, and an SSM-based sequence model, together with three prediction heads for observations, rewards, and episode continuation. This design allows parallel computation over the sequence during training and efficient recurrent inference, enabling fast generation of imagined trajectories.

R2I is evaluated across both memory-intensive and standard control domains. It achieves state-of-the-art performance on challenging memory and credit assignment benchmarks such as BSuite and POPGym, and superhuman performance in the complex Memory Maze domain, while maintaining strong performance on classic RL benchmarks such as Atari and DMC, indicating its generality. R2I is also faster than DreamerV3, the state-of-the-art MBRL method, converging in less wall-clock time. Overall, the results show that R2I substantially improves memory capabilities without sacrificing performance on other tasks, making it a promising approach to model-based reinforcement learning.
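To make the dual training/inference computation concrete, the sketch below shows a minimal discrete-time linear SSM with a full-sequence pass (analogous to how the world model can be trained over whole sequences) and a single-step recurrent update (analogous to generating imagined trajectories one step at a time). It is an illustrative sketch only, assuming a plain NumPy implementation: the class name LinearSSM, the diagonal transition matrix, and all shapes are hypothetical choices for exposition, not the paper's S4 variant or R2I's actual code.

```python
# Minimal sketch of the dual-mode SSM computation described above:
# the same (A, B, C) parameters process a whole sequence during training
# and roll forward one step at a time during imagination.
# All names and shapes here are illustrative assumptions, not R2I's implementation.

import numpy as np

class LinearSSM:
    """Discrete-time linear state space layer: x_t = A x_{t-1} + B u_t, y_t = C x_t."""

    def __init__(self, state_dim: int, input_dim: int, output_dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        # Stable diagonal transition (entries < 1) keeps long rollouts bounded.
        self.A = np.diag(rng.uniform(0.9, 0.999, size=state_dim))
        self.B = rng.normal(scale=0.1, size=(state_dim, input_dim))
        self.C = rng.normal(scale=0.1, size=(output_dim, state_dim))

    def forward_sequence(self, u: np.ndarray) -> np.ndarray:
        """Process a full sequence u of shape (T, input_dim), as in world-model training.

        Written as a sequential scan for clarity; S4-style models compute the same
        linear recurrence with a parallel convolution/scan on accelerators.
        """
        T = u.shape[0]
        x = np.zeros(self.A.shape[0])
        ys = np.empty((T, self.C.shape[0]))
        for t in range(T):
            x = self.A @ x + self.B @ u[t]
            ys[t] = self.C @ x
        return ys

    def step(self, x: np.ndarray, u_t: np.ndarray):
        """Single recurrent step, as used when imagining trajectories step by step."""
        x_next = self.A @ x + self.B @ u_t
        y_t = self.C @ x_next
        return x_next, y_t


if __name__ == "__main__":
    ssm = LinearSSM(state_dim=16, input_dim=8, output_dim=4)
    u = np.random.default_rng(1).normal(size=(32, 8))

    # Training-style pass over the whole sequence.
    y_seq = ssm.forward_sequence(u)

    # Inference-style recurrent rollout; matches the sequence pass step by step.
    x = np.zeros(16)
    for t in range(32):
        x, y_t = ssm.step(x, u[t])
        assert np.allclose(y_t, y_seq[t])
```

The two code paths compute identical outputs, which is the property that lets an SSM-based sequence model train in parallel over long sequences yet remain cheap to unroll one step at a time when generating imagined trajectories.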