September 6, 2024 | Alaa Youssef, PhD; Ariadne A. Nichol, BA; Nicole Martinez-Martin, JD, PhD; David B. Larson, MD, MBA; Michael Abramoff, MD; Risa M. Wolf, MD; Danton Char, MD, MS
This study explores the ethical considerations in the design and conduct of clinical trials for artificial intelligence (AI) in clinical settings, focusing on diabetic retinopathy (DR) screening. The research aimed to determine the generalizability of the 7 ethical principles for clinical trials endorsed by the National Institutes of Health (NIH) and to identify ethical concerns unique to AI clinical trials. The study involved 11 investigators engaged in AI clinical trials for DR screening, with diverse expertise in ethics, ophthalmology, translational medicine, biostatistics, and AI development. Key themes identified include difficulties in measuring social value, establishing scientific validity, ensuring fair participant selection, evaluating risk-benefit ratios across patient subgroups, and addressing complex informed consent processes.
Participants highlighted unique ethical challenges specific to AI trials, such as defining and quantifying social value, ensuring equitable access to care, and navigating the complexities of informed consent. The study also identified novel ethical considerations, including whose values are prioritized in AI systems, whether AI can enhance clinical workflows without compromising patient safety, balancing economic incentives with ethical obligations, and the ethical implications of expanding DR screening without improving treatment access.
The study found that while the NIH's 7 ethical principles are applicable to AI clinical trials, there are important areas of uncertainty, particularly regarding social value, scientific validity, fair participant selection, favorable risk-benefit ratio, and informed consent. Integrating AI into clinical workflows raises ethical tensions, including the need to ensure patient safety while AI reshapes workflows and system operations. The study also highlights the importance of addressing these ethical challenges in future iterations of ethical guidance for AI trials. The findings suggest that the concept of equipoise in clinical trials is more complex for AI interventions, as AI systems not only affect individual patient care but also integrate with and transform healthcare workflows and system operations. The study concludes that further guidance is needed to ensure that AI trials are responsive to clinical contexts and that ethical considerations are adequately addressed.