MEMORYLLM is a self-updatable large language model that integrates a transformer with a fixed-size memory pool within its latent space. This memory pool enables the model to efficiently incorporate new knowledge while retaining previous information. The model updates its memory pool through a self-update mechanism that selectively incorporates new knowledge while gradually forgetting outdated information. Evaluations show that MEMORYLLM performs well on model editing, long-context tasks, and knowledge retention experiments, and it maintains performance even after nearly a million memory updates, demonstrating the robustness and integrity of the memory mechanism. The model is open-sourced and can be extended to handle longer contexts and multimodal inputs. MEMORYLLM addresses the challenge of updating large language models with new knowledge without degrading their performance, offering a scalable and efficient solution for continuous learning.
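
As a rough illustration of the self-update idea, the sketch below maintains a fixed-size pool of latent memory tokens and, on each update, replaces a randomly chosen subset of old tokens with tokens assumed to be compressed from the new context. The class name, pool sizes, and the external compression step are hypothetical and only meant to convey selective incorporation with gradual forgetting, not the exact implementation.

```python
import torch

class MemoryPool:
    """Minimal sketch (hypothetical API) of a fixed-size latent memory pool.

    Each self-update drops `k` randomly chosen old memory tokens and appends
    `k` newly compressed ones, so the pool size stays constant and old
    knowledge decays gradually instead of being overwritten all at once.
    """

    def __init__(self, pool_size: int = 1024, k: int = 64, hidden_dim: int = 4096):
        self.pool_size = pool_size                        # fixed number of memory tokens
        self.k = k                                        # tokens replaced per update
        self.memory = torch.randn(pool_size, hidden_dim)  # latent memory tokens

    def self_update(self, new_tokens: torch.Tensor) -> None:
        """Merge `k` tokens (compressed elsewhere from new context) into the pool."""
        assert new_tokens.shape == (self.k, self.memory.shape[1])
        # Randomly keep pool_size - k of the existing tokens ...
        keep = torch.randperm(self.pool_size)[: self.pool_size - self.k]
        survivors = self.memory[keep.sort().values]
        # ... and append the new tokens, so the pool never grows.
        self.memory = torch.cat([survivors, new_tokens], dim=0)
```

Under this sketch, each update retains a fraction (pool_size - k) / pool_size of the existing tokens, so knowledge injected n updates ago survives with probability roughly ((pool_size - k) / pool_size)**n, which is one simple way to realize the gradual forgetting described above.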