The paper introduces Adaptive Token Biaser (ATBIAS), a new decoding technique designed to enhance in-context editing (ICE) in large language models (LLMs). ATBIAS biases the logits of the tokens most relevant to knowledge during decoding by matching key entities associated with new and parametric knowledge. This approach significantly improves ICE performance, achieving up to a 32.3% improvement over state-of-the-art methods while incurring only half the latency. ATBIAS is particularly effective at editing stubborn knowledge, i.e., facts that resist change because the model holds strong pre-trained confidence in them. The method is evaluated across multiple datasets and models, demonstrating its robustness and efficiency. Because ATBIAS operates on only a small number of key tokens, it reduces the risk of introducing errors and remains suitable for real-world applications at negligible additional cost. The paper also includes an ablation study validating the effectiveness of each component of ATBIAS.
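To make the core idea concrete, the sketch below shows generic logit biasing during decoding with the Hugging Face `transformers` API: a small set of token IDs tied to the new knowledge receives an additive bias before sampling. This is a minimal illustration only; `key_token_ids` and the bias strength `alpha` are hypothetical placeholders and do not reproduce ATBIAS's actual entity-matching or adaptive bias computation.

```python
# A minimal sketch of biasing selected token logits during decoding.
# `key_token_ids` and `alpha` are illustrative assumptions, not the
# paper's implementation.
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    LogitsProcessor,
    LogitsProcessorList,
)

class KeyTokenBiaser(LogitsProcessor):
    """Adds a fixed bias to the logits of a chosen set of token IDs."""

    def __init__(self, key_token_ids: list[int], alpha: float):
        self.key_token_ids = key_token_ids  # tokens tied to the new knowledge
        self.alpha = alpha                  # bias strength (hypothetical)

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        # Raise the scores of the key tokens so decoding favors the edited fact.
        scores[:, self.key_token_ids] += self.alpha
        return scores

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
# In ATBIAS these IDs would come from matching key entities of the new
# knowledge; here they are hard-coded purely for illustration.
key_token_ids = tokenizer(" Paris", add_special_tokens=False).input_ids

inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    **inputs,
    max_new_tokens=5,
    logits_processor=LogitsProcessorList([KeyTokenBiaser(key_token_ids, alpha=4.0)]),
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Because the bias touches only a handful of key tokens rather than the full vocabulary distribution, the per-step overhead is small, which is consistent with the paper's claim of negligible additional cost.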