The introduction of the article highlights the growing importance of artificial intelligence (AI) and machine learning (ML)-enabled medical devices (AI/ML devices) in advancing healthcare. These devices offer significant advancements in disease detection, personalized diagnostics, and therapeutic interventions, while also enabling continuous learning and adaptation. The U.S. Food and Drug Administration (FDA) is reviewing an increasing number of AI/ML device applications, with nearly 700 receiving marketing authorization as of October 2023. The FDA's Center for Devices and Radiological Health (CDRH) has recognized the unique considerations in developing and regulating these devices, including usability, equity of access, performance bias management, and stakeholder accountability. CDRH released an action plan in January 2021 to promote transparency in AI/ML devices, emphasizing a patient-centered approach and collaboration with stakeholders.
The article details a virtual public workshop hosted by CDRH in October 2021, which aimed to identify ways to enhance transparency for users of AI/ML devices. Key discussions revolved around the meaning and role of transparency, the importance of clear communication, and the need for a human-centered design approach. Transparency was defined as the degree to which appropriate information about a device, including its intended use, development, performance, and logic, is communicated to stakeholders. This includes information on data sources, demographics, safety, effectiveness, and real-world performance.
Stakeholder perspectives were diverse, with patients expressing concerns about the impact of AI/ML devices on their care and the need for educational resources. Healthcare providers emphasized the importance of trust and the need for transparent communication about device training, testing, and performance. Payors highlighted the variability in device performance across different patient populations and the need for diversified datasets. Industry members suggested a risk-based approach to transparency, balancing regulatory burdens with proprietary risks.
The article concludes by discussing strategies to promote transparency, including the use of complementary communication methods, expanded information delivery, and regulatory science efforts to address cross-cutting areas of impact. The goal is to enhance stakeholder knowledge and enable informed decision-making, ultimately supporting the safe and effective use of AI/ML devices.