This paper addresses the issue of model collapse during sequential editing with Rank-One Model Editing (ROME), a popular method for updating knowledge in large language models (LLMs). The authors identify that certain edits, which they call disabling edits, cause immediate model collapse and limit the usefulness of ROME for sequential editing. They trace these disabling edits to irregularities in the original implementation of ROME, specifically the asymmetric use of key-vectors in the update equation. To address this, they propose a more stable implementation, r-ROME, which uses homogeneous key-vectors, and demonstrate that it prevents model collapse while improving the generalization and locality of edits. The paper also provides a detailed mathematical explanation of the causes of disabling edits and evaluates r-ROME on standard model editing metrics and downstream tasks, showing superior results compared to the original ROME implementation. The authors conclude that r-ROME enables stable and scalable sequential model editing, making it a valuable tool for large-scale knowledge editing in LLMs.
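
For intuition, ROME's edit is a closed-form rank-one update to a feed-forward weight matrix: given a key-vector k* (representing the edited subject), a target value-vector v*, and a precomputed key covariance matrix C, the new weights are W' = W + (v* - W k*)(C^{-1} k*)^T / ((C^{-1} k*)^T k*). The sketch below is a minimal NumPy illustration of this update under homogeneous key usage, where the same k* appears in every term; the variable names (`W`, `C`, `k_star`, `v_star`) are hypothetical, and this is not the authors' implementation, in which the asymmetry arises from two differently computed key-vectors entering the equation.

```python
import numpy as np

def rank_one_update(W, C, k_star, v_star):
    """Closed-form ROME-style rank-one edit (illustrative sketch).

    W      : (d_out, d_in) layer weight matrix to edit
    C      : (d_in, d_in) covariance of key activations (assumed precomputed)
    k_star : (d_in,) key-vector for the edited fact
    v_star : (d_out,) target value-vector encoding the new fact
    """
    # Solve C^{-1} k* once and reuse it; the same k_star is used in
    # every term below (the "homogeneous" usage the paper advocates).
    Cinv_k = np.linalg.solve(C, k_star)

    # Residual between the desired output and the layer's current
    # output for this key.
    resid = v_star - W @ k_star

    # Rank-one update: W' = W + resid (C^{-1} k*)^T / ((C^{-1} k*)^T k*)
    return W + np.outer(resid, Cinv_k) / (Cinv_k @ k_star)
```

Using one consistently defined key-vector throughout keeps the update self-consistent, so that after the edit the layer maps k* exactly to v*; per the paper's analysis, mixing two differently computed key-vectors breaks this consistency and produces the large-norm updates associated with disabling edits.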