9 Oct 2024 | Akshat Gupta, Dev Sajnani, Gopala Anumanchipalli
This paper unifies two popular model editing techniques, ROME and MEMIT, under a single conceptual framework called the preservation-memorization objective. Within this framework, the two methods differ in how they enforce edits: ROME uses an equality constraint and supports only one edit at a time, while MEMIT employs a more flexible least-squares constraint that enables batched edits. The authors generalize ROME to batched editing under equality constraints, introducing EMMET (Equality-constrained Mass Model Editing in Transformers). EMMET can perform batched edits up to a batch size of 10,000, achieving performance similar to MEMIT across multiple metrics.
The paper shows that ROME and MEMIT share the same underlying optimization objective and exhibit comparable abilities, performance, and limitations. It also disentangles the edit-distribution algorithm from MEMIT, enabling a fair comparison between the two editing algorithms. Experiments on various models and datasets demonstrate that EMMET and MEMIT have similar performance and degradation patterns, suggesting that the equality constraint does not necessarily lead to better model editing accuracy. The unified framework and the disentanglement of the edit-distribution algorithm enable a deeper understanding and easier comparison of these model editing methods.
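The preservation-memorization objective summarized above can be sketched roughly as follows. The notation here is illustrative and should be checked against the paper: $W_0$ is the original weight matrix of the edited layer, $K_0$ collects key vectors whose outputs should be preserved, and $k_i^e, v_i^e$ (stacked as $K_E, V_E$ for a batch) are the key-value pairs to be memorized.

```latex
% Equality-constrained memorization (ROME; EMMET extends this to batches):
\hat{W} = \arg\min_{W}\ \lVert W K_0 - W_0 K_0 \rVert
\quad \text{s.t.} \quad W k_i^e = v_i^e \ \ \forall i
% Least-squares memorization (MEMIT): the constraint becomes a penalty term:
\hat{W} = \arg\min_{W}\ \lVert W K_0 - W_0 K_0 \rVert^2 + \lVert W K_E - V_E \rVert^2
```

In both cases the first term preserves the layer's behavior on existing knowledge, while the second mechanism memorizes the new facts, either exactly (equality) or approximately (least squares).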