This paper addresses the issue of quantization-conditioned backdoor (QCB) attacks, which exploit the standard quantization process to implant dormant backdoors in neural networks. The authors identify that the activation of these backdoors is closely related to the nearest rounding operation in quantization, which introduces truncation errors. They propose Error-guided Flipped Rounding with Activation Preservation (EFRAP), a defense mechanism that learns a non-nearest rounding strategy to disrupt the link between truncation errors and backdoor activation while preserving clean accuracy. Extensive evaluations on benchmark datasets demonstrate that EFRAP effectively mitigates state-of-the-art QCB attacks under various settings, outperforming existing defenses. The paper also includes a detailed analysis of the threat model, experimental setup, and ablation studies to validate the effectiveness and robustness of EFRAP.
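The core mechanism described above can be illustrated with a minimal sketch of uniform quantization, contrasting standard nearest rounding with a flipped (non-nearest) rounding that perturbs the truncation errors a QCB attacker relies on. Note this is a simplified illustration of the underlying idea only: the flip-selection heuristic below is hypothetical and does not reproduce EFRAP's actual error-guided optimization or its activation-preservation objective.

```python
import numpy as np

def quantize_nearest(w, scale):
    """Standard uniform quantization with nearest rounding.
    Introduces truncation errors e = dequant(quant(w)) - w,
    with |e| <= scale / 2 per weight."""
    return np.round(w / scale) * scale

def quantize_flipped(w, scale, flip_mask):
    """Non-nearest rounding sketch: where flip_mask is True, round in
    the opposite direction (floor <-> ceil), changing the sign of the
    truncation error for those weights."""
    x = w / scale
    q_near = np.round(x)
    # If nearest rounding went down (x > q_near), flip to ceil; if it
    # went up (x < q_near), flip to floor. Exact integers are unchanged.
    q_flip = np.where(x > q_near, np.ceil(x), np.floor(x))
    return np.where(flip_mask, q_flip, q_near) * scale

# Toy weights and scale (illustrative values, not from the paper).
rng = np.random.default_rng(0)
w = rng.normal(size=8)
scale = 0.1

# Hypothetical heuristic: flip the weights with the largest
# fractional (truncation) error.
frac_err = np.abs(w / scale - np.round(w / scale))
flip = frac_err > 0.3

err_near = quantize_nearest(w, scale) - w
err_flip = quantize_flipped(w, scale, flip) - w
```

On flipped weights the truncation error changes sign and its magnitude grows from at most `scale / 2` to at most `scale`, which is why a defense of this shape must also control accuracy degradation; in the paper that role is played by the activation-preservation term.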