Model Editing by Standard Fine-Tuning

3 Jun 2024 | Govind Gangadhar and Karl Stratos
The paper "Model Editing by Standard Fine-Tuning" by Govind Gangadhar and Karl Stratos of Rutgers University examines how effective standard fine-tuning is for model editing: altering a language model to incorporate new knowledge while preserving its existing inferences. Fine-tuning is often dismissed as inferior to specialized editing methods because of its poor performance, but the authors argue that it is simple, architecture-agnostic, and able to leverage advances in standard training techniques such as parameter-efficient fine-tuning (PEFT).

The authors propose two minor modifications to standard fine-tuning to improve its performance:

1. Optimizing the conditional likelihood of the new fact's answer given its prompt, rather than the full sequence likelihood.
2. Augmenting the training data with random or similar unedited facts to encourage locality.

Evaluated on the ZsRE and COUNTERFACT datasets, these modifications allow standard fine-tuning to match or outperform highly specialized editors in edit score. The paper also discusses the trade-offs among efficacy, generalization, and locality, and compares the approach with existing model editors such as MEND, ROME, MEMIT, and IKE. The results show that the method achieves competitive edit scores without specialized adapters or layer selection, highlighting the potential of standard fine-tuning for model editing.
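The two modifications can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes per-token log-probabilities have already been computed by some language model, and the function names (`full_nll`, `conditional_nll`, `augment_batch`) are hypothetical. It shows only the core ideas, masking the prompt tokens out of the loss and mixing unedited facts into each training batch.

```python
import random

def full_nll(token_logprobs):
    """Standard objective: negative log-likelihood of the whole
    sequence (prompt and answer tokens alike)."""
    return -sum(token_logprobs)

def conditional_nll(token_logprobs, prompt_len):
    """Conditional objective: mask out the prompt so the loss is
    computed only on the answer tokens of the edited fact."""
    return -sum(token_logprobs[prompt_len:])

def augment_batch(edit_facts, unedited_pool, k, seed=0):
    """Mix a batch of edits with k unedited facts (sampled at
    random here; similarity-based sampling is the other option
    the summary mentions) to encourage locality."""
    rng = random.Random(seed)
    return edit_facts + rng.sample(unedited_pool, k)

# Toy example: a 5-token sequence whose first 3 tokens are the
# prompt x and whose last 2 tokens are the target answer y.
logps = [-0.5, -1.2, -0.8, -0.3, -0.1]
print(round(full_nll(logps), 2))            # 2.9 (loss over x and y)
print(round(conditional_nll(logps, 3), 2))  # 0.4 (loss over y only)

batch = augment_batch(
    ["(Eiffel Tower, located_in, Rome)"],        # the edit
    ["(Paris, capital_of, France)",              # unedited facts
     "(Berlin, capital_of, Germany)"],
    k=1,
)
print(len(batch))  # 2: one edit plus one unedited fact
```

In a real fine-tuning loop the same masking is typically achieved by setting the prompt positions' labels to an ignore index so the cross-entropy loss skips them.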