The paper introduces Representation Editing (RED), a novel parameter-efficient fine-tuning (PEFT) approach for neural models. RED edits the representations produced at certain layers through learned scaling and biasing operations, which substantially reduces the number of trainable parameters compared to full-parameter fine-tuning and other PEFT methods such as LoRA. Extensive experiments across a range of models (RoBERTa, GPT-2, T5, LLaMA-2) show that RED achieves comparable or superior performance while training only a small fraction of the parameters. The method is both efficient and effective, making it a promising strategy for adapting large-scale neural models. The paper also includes a comprehensive ablation study that examines how the individual components of RED contribute to its performance.
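To make the core mechanism concrete, the sketch below illustrates the kind of edit the summary describes: a learned element-wise scale and bias applied to a layer's hidden representations, with the base model frozen. The class name, placement, and initialization are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn


class RepresentationEdit(nn.Module):
    """Hypothetical per-layer representation edit: element-wise scale and bias.

    Only `scale` and `bias` are trainable; the backbone model's weights
    stay frozen. This is a sketch of the general idea, not the paper's code.
    """

    def __init__(self, hidden_size: int):
        super().__init__()
        # Initialize as an identity edit: scale = 1, bias = 0,
        # so training starts from the frozen model's behavior.
        self.scale = nn.Parameter(torch.ones(hidden_size))
        self.bias = nn.Parameter(torch.zeros(hidden_size))

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size)
        return hidden_states * self.scale + self.bias


# Usage sketch: apply the edit to a layer's output representations.
edit = RepresentationEdit(hidden_size=768)
hidden = torch.randn(2, 16, 768)  # dummy activations from a frozen layer
edited = edit(hidden)
```

Under this formulation, each edited layer adds only 2 x hidden_size trainable parameters, which helps explain why the parameter count reported for RED is far below that of low-rank adapters like LoRA, whose per-matrix cost scales with the chosen rank.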