This paper explores a novel attack scenario in which adversaries inject covert backdoors to mislead dense retrieval systems. Unlike previous methods that rely on access to model weights and produce conspicuous outputs, the proposed attack uses grammatical errors as triggers that cause the retrieval of attacker-specified content. The approach ensures that the models behave normally on standard queries while covertly retrieving attacker-specified content when a query contains minor linguistic mistakes. The study shows that the contrastive loss is sensitive to grammatical errors and that hard negative sampling can exacerbate the vulnerability to backdoor attacks. The proposed method achieves a high attack success rate with a minimal corpus poisoning rate of 0.048% while preserving normal retrieval performance. Evaluations against three real-world defense strategies show that the malicious passages remain highly resistant to detection and filtering, highlighting the robustness and subtlety of the proposed attack.
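For context on the training objective the attack exploits, the sketch below shows a standard contrastive (InfoNCE) loss for dense retrieval with in-batch and mined hard negatives. This is a generic illustration under assumed names (`info_nce_loss`, random embeddings), not the paper's implementation; it only indicates how a poisoned query-passage pair would be pulled together by the same objective.

```python
# Minimal sketch of a dense-retrieval contrastive (InfoNCE) loss with hard
# negatives. All names and values are illustrative, not taken from the paper.
import torch
import torch.nn.functional as F

def info_nce_loss(q_emb, pos_emb, hard_neg_emb, temperature=0.05):
    """q_emb: (B, d) query embeddings
    pos_emb: (B, d) embeddings of the relevant (positive) passages
    hard_neg_emb: (B, d) embeddings of mined hard-negative passages
    """
    # Scores against in-batch positives (diagonal = true pair) and hard negatives.
    pos_scores = q_emb @ pos_emb.t()        # (B, B)
    neg_scores = q_emb @ hard_neg_emb.t()   # (B, B)
    logits = torch.cat([pos_scores, neg_scores], dim=1) / temperature
    labels = torch.arange(q_emb.size(0), device=q_emb.device)
    # Cross-entropy pulls each query toward its paired passage and pushes it
    # away from every other passage in the batch, including hard negatives.
    return F.cross_entropy(logits, labels)

# Usage with random normalized embeddings. If attacker-specified passages are
# paired with grammatically perturbed queries in the training data, this same
# loss would align them, which is the mechanism the described attack leverages.
B, d = 8, 768
q = F.normalize(torch.randn(B, d), dim=-1)
p = F.normalize(torch.randn(B, d), dim=-1)
n = F.normalize(torch.randn(B, d), dim=-1)
print(info_nce_loss(q, p, n).item())
```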