MemFlow is a real-time optical flow estimation and prediction method that uses a memory module to store and aggregate historical motion information. Efficient real-time processing comes from its memory read-out and update modules, together with resolution-adaptive re-scaling that accommodates diverse video resolutions, and the same design lets MemFlow extend seamlessly to predicting future optical flow from past observations. Concretely, a memory buffer stores historical motion and context features, and an attention mechanism reads out the motion information that is useful for estimating flow in the current frame. MemFlow outperforms existing methods such as VideoFlow and FlowFormer in generalization, inference speed, and computational efficiency on datasets including Sintel, KITTI-15, and the 1080p Spring dataset, while MemFlow-T, a variant using a vision transformer, achieves state-of-the-art performance with reduced computational overhead. Notably, MemFlow can also predict future flow without being explicitly trained for it, demonstrating its adaptability. In summary, the key contributions are real-time optical flow estimation with a memory module, improved cross-resolution generalization through resolution-adaptive re-scaling, strong optical flow estimation accuracy, and future prediction capability without explicit training; evaluations on multiple benchmarks show strong performance in both optical flow estimation and video prediction tasks.
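To make the memory read-out step more concrete, below is a minimal PyTorch sketch of how attention over a buffer of historical features can aggregate past motion for the current frame. This is an illustrative assumption, not the authors' implementation: the class name MemoryReadout, the tensor shapes, and the simple concatenation of current and aggregated features are all hypothetical choices made for clarity.

```python
# Hypothetical sketch of attention-based memory read-out (not the official MemFlow code).
import torch
import torch.nn as nn


class MemoryReadout(nn.Module):
    """Attend from current-frame features to a buffer of historical motion features."""

    def __init__(self, dim: int = 128):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)  # query from current-frame motion/context features
        self.to_k = nn.Linear(dim, dim)  # keys from features stored in the memory buffer
        self.scale = dim ** -0.5

    def forward(self, curr_feat, mem_keys, mem_values):
        # curr_feat:  (B, N, C) current-frame features (N spatial tokens)
        # mem_keys:   (B, M, C) historical context features kept in the memory buffer
        # mem_values: (B, M, C) historical motion features to be aggregated
        q = self.to_q(curr_feat)
        k = self.to_k(mem_keys)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        aggregated = attn @ mem_values  # (B, N, C) motion information read from memory
        # Fuse the aggregated historical motion with the current features for decoding.
        return torch.cat([curr_feat, aggregated], dim=-1)


if __name__ == "__main__":
    B, N, M, C = 1, 1024, 2048, 128
    readout = MemoryReadout(dim=C)
    fused = readout(torch.randn(B, N, C), torch.randn(B, M, C), torch.randn(B, M, C))
    print(fused.shape)  # torch.Size([1, 1024, 256])
```

In this sketch the memory update would simply append the current frame's keys and values to mem_keys and mem_values (dropping the oldest entries once the buffer is full), which mirrors the buffer-based aggregation described above without claiming to match the paper's exact formulation.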