February 23, 2024 | Raj M. Ratwani, PhD; David W. Bates, MD; David C. Classen, MD
The article "Patient Safety and Artificial Intelligence in Clinical Care" by Raj M. Ratwani, PhD; David W. Bates, MD; and David C. Classen, MD, discusses the potential of artificial intelligence (AI) to enhance patient safety in healthcare settings. While AI can detect and predict conditions like sepsis, pressure ulcers, and adverse drug events, its misuse can lead to patient harm. For instance, a widely used AI system for sepsis detection identified only 7% of 2552 patients with sepsis, leading to delayed treatment.
To mitigate these risks, the authors emphasize the need for patient safety safeguards, including guidelines for safe AI implementation, frequent monitoring of AI for patient safety issues, and traceability of AI system contributions to patient safety events. They recommend developing guidelines based on human-systems integration principles, frequently testing AI systems for vulnerabilities, and capturing metadata to support detailed reviews of AI-related incidents.
The article also highlights the importance of aligning AI safety programs with existing frameworks and guidelines from federal agencies like the FDA and NIST. It suggests that federal and state programs may be necessary to support healthcare organizations in implementing these safeguards, especially given the rapid pace of AI development. The authors conclude that the executive order by President Biden, which calls for an AI safety program within 365 days, should serve as a catalyst for the rapid development of patient-focused safeguards.