This paper explores the risks and vulnerabilities associated with large language model (LLM)-powered scientific agents, emphasizing the need to prioritize safety over autonomy. It highlights the potential dangers these agents pose across scientific domains, including chemistry, biology, and physics, and their impact on the environment and human health. The paper proposes a triadic framework involving human regulation, agent alignment, and environmental feedback to mitigate these risks. It discusses the limitations and challenges of current safety measures and advocates for improved models, robust benchmarks, and comprehensive regulations. The paper also reviews existing work on agent safety and underscores the need for a systematic approach to broader safety concerns. Finally, it emphasizes the importance of balancing autonomy with security to ensure the safe and ethical use of scientific agents.