This paper provides a comprehensive survey of attacks targeting Large Language Models (LLMs), discussing their nature, mechanisms, potential impacts, and current defense strategies. The authors categorize the spectrum of attacks from white-box and black-box perspectives, covering threats such as adversarial attacks, data poisoning, and privacy leakage. They examine specific attack techniques, including jailbreaks, prompt injection, and data poisoning, detailing the effectiveness of different methodologies and the resilience of LLMs against them. The paper also underscores the importance of understanding and protecting LLMs from sophisticated security threats, emphasizing the need for robust defense mechanisms. In addition, it discusses open challenges and future research directions, including real-time monitoring systems, multimodal approaches, benchmarking, and explainable LLMs. The authors aim to offer a nuanced understanding of LLM attacks, foster awareness within the AI community, and inspire robust solutions to mitigate these risks.