This paper presents a comprehensive risk taxonomy, mitigation strategies, and assessment benchmarks for large language model (LLM) systems. The authors analyze four essential modules of an LLM system: input, language model, toolchain, and output. They propose a module-oriented risk taxonomy that systematically categorizes the potential risks associated with each module and discuss corresponding mitigation strategies. The paper also reviews prevalent risk assessment benchmarks to facilitate the evaluation of LLM systems. The authors emphasize the importance of a systematic perspective in building responsible LLM systems and the need for comprehensive risk management across all modules. Identified risks include privacy leakage, toxicity, bias, and hallucinations, as well as vulnerabilities in the toolchain and output modules. The authors also discuss security concerns arising from the software development tools, hardware platforms, and external tools used to develop and deploy LLM systems. The paper concludes that a comprehensive risk taxonomy is essential for ensuring the safety and security of LLM systems and that further research is needed to address the remaining challenges in this area.
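To make the module-oriented structure concrete, the sketch below encodes the four modules and the representative risks named in this summary as a simple Python mapping. The module names and the language-model risks (privacy leakage, toxicity, bias, hallucinations) come from the summary; the data structure itself, the `risks_for` helper, and the example input- and output-module entries are hypothetical illustrations, not the paper's actual taxonomy.

```python
from enum import Enum


class Module(Enum):
    """The four essential modules of an LLM system named in the taxonomy."""
    INPUT = "input"
    LANGUAGE_MODEL = "language model"
    TOOLCHAIN = "toolchain"
    OUTPUT = "output"


# Representative risks per module, drawn from the summary where possible.
# The assignment of privacy leakage, toxicity, bias, and hallucinations to
# the language model module is inferred from the summary; the INPUT and
# OUTPUT entries are assumed placeholders for illustration only.
RISK_TAXONOMY: dict[Module, list[str]] = {
    Module.INPUT: ["adversarial or malicious prompts"],  # assumption
    Module.LANGUAGE_MODEL: ["privacy leakage", "toxicity", "bias", "hallucinations"],
    Module.TOOLCHAIN: [
        "vulnerabilities in software development tools",
        "vulnerabilities in hardware platforms",
        "vulnerabilities in external tools",
    ],
    Module.OUTPUT: ["unsafe or unvetted generated content"],  # assumption
}


def risks_for(module: Module) -> list[str]:
    """Look up the representative risks catalogued for a given module."""
    return RISK_TAXONOMY.get(module, [])


if __name__ == "__main__":
    # Print the toy taxonomy, one module per line.
    for module in Module:
        print(f"{module.value}: {', '.join(risks_for(module))}")
```

A mapping like this is only a toy, but it illustrates the paper's organizing idea: risks are indexed by the system module in which they arise, so mitigation and assessment effort can be directed module by module rather than at the LLM in isolation.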