This paper presents a comprehensive survey of attacks on Large Language Models (LLMs), discussing their mechanisms, impacts, and current defense strategies. It categorizes attacks into white-box and black-box types, focusing on jailbreaking, prompt injection, and data poisoning. The study highlights vulnerabilities such as adversarial inputs, data poisoning, and privacy risks, and explores effective attack methodologies and the resilience of LLMs against these threats. It also discusses mitigation strategies, including input/output filtering, guardrails, and robust training techniques. The paper emphasizes the importance of understanding and mitigating these attacks to ensure the security and trustworthiness of AI systems. It identifies key challenges and future research directions, including real-time monitoring systems, multimodal approaches, benchmarking, and explainable LLMs. The study underscores the need for ongoing research and interdisciplinary collaboration to address the evolving landscape of AI security.This paper presents a comprehensive survey of attacks on Large Language Models (LLMs), discussing their mechanisms, impacts, and current defense strategies. It categorizes attacks into white-box and black-box types, focusing on jailbreaking, prompt injection, and data poisoning. The study highlights vulnerabilities such as adversarial inputs, data poisoning, and privacy risks, and explores effective attack methodologies and the resilience of LLMs against these threats. It also discusses mitigation strategies, including input/output filtering, guardrails, and robust training techniques. The paper emphasizes the importance of understanding and mitigating these attacks to ensure the security and trustworthiness of AI systems. It identifies key challenges and future research directions, including real-time monitoring systems, multimodal approaches, benchmarking, and explainable LLMs. The study underscores the need for ongoing research and interdisciplinary collaboration to address the evolving landscape of AI security.