This paper provides a comprehensive survey of the safety of Multimodal Large Language Models (MLLMs) on images and text. It begins with an overview of MLLMs and the concept of safety, followed by a review of the evaluation datasets and metrics used to measure MLLM safety. The paper then examines attack and defense techniques, covering methods for inducing unsafe content as well as approaches for resisting such attacks. Finally, it analyzes unresolved issues and suggests future research directions. The authors highlight the risks unique to the visual modality, the challenges of measuring safety levels, and the need for more comprehensive evaluation datasets and metrics. They also discuss the effectiveness of attack methods such as adversarial image construction and visual prompt injection, and the development of defense mechanisms, including inference-time and training-time alignment. The paper concludes with a discussion of reliable safety evaluation, in-depth study of safety risks, and the balance between safety and utility in MLLMs.