A trustworthy AI reality-check: the lack of transparency of artificial intelligence products in healthcare

20 February 2024 | Jana Fehr, Brian Citro, Rohit Malpani, Christoph Lippert and Vince I. Madai
The article "A trustworthy AI reality-check: the lack of transparency of artificial intelligence products in healthcare" by Fehr et al. (2024) examines the transparency of 14 CE-certified AI-based radiology products in the EU, focusing on their development, validation, ethical considerations, and deployment caveats. The study uses a self-designed survey to assess the transparency of these products, scoring each question 0, 0.5, or 1 to indicate how much information is publicly available. Transparency scores range from 6.4% to 60.9%, with a median of 29.1%. Key gaps include missing documentation on training data, ethical considerations, and deployment limitations.

Ethical aspects such as consent, safety monitoring, and GDPR compliance are rarely documented, and deployment caveats for different demographics and medical settings are scarce. The authors conclude that the public documentation of authorized medical AI products in Europe lacks sufficient transparency to inform users about safety and risks, and they call on lawmakers and regulators to establish legally mandated requirements for public and substantive transparency to ensure trustworthy AI in healthcare.