The paper introduces a novel defense strategy called Intention Analysis (IA) to enhance the safety of large language models (LLMs) against complex and stealthy jailbreak attacks. IA leverages the intrinsic intent-recognition capabilities of LLMs through a two-stage process: essential intention analysis and policy-aligned response. The first stage instructs the LLM to analyze the core intention behind the user query, focusing on safety, ethics, and legality. The second stage then elicits a final response that adheres to safety policies. The method is inference-only, requiring no additional safety training, and it significantly reduces the harmfulness of LLM outputs while maintaining their helpfulness.
Extensive experiments on various benchmarks show that IA consistently reduces attack success rates by an average of 53.1%, outperforming other defense methods. The paper also discusses the effectiveness of IA in handling advanced jailbreak attacks and its impact on the general helpfulness of LLMs. The authors conclude by highlighting the importance of intention analysis in improving LLM safety and suggest future directions for further research.