2024 | M.M. Raza, Kaushik P. Venkatesh, Joseph C. Kvedar
Generative AI and large language models (LLMs) are increasingly being applied in healthcare, particularly in processing electronic medical records (EMRs). These models, trained on vast amounts of data, aim to simulate human conversation and generate new content, such as text, images, or music. Recent reviews highlight the potential of generative AI in healthcare, including improved predictive performance, simpler model development, and lower deployment costs. However, challenges remain, including limited generalizability, data privacy issues, and the risk of "hallucination," in which a model generates false information when insufficient data is available.
Wornow et al. conducted a comprehensive review of 84 foundation models trained on structured clinical text data from EMRs. They distinguished between clinical language models, which process clinical text, and EMR models, which generate machine-understandable representations of patients. Both types of models show promise, but current applications are limited by data privacy concerns and a lack of generalizability across different EMR systems.
To address these issues, Wornow et al. propose an evaluation framework for generative AI models in healthcare, focusing on six criteria: predictive performance, data labeling, model deployment, emergent clinical applications, multimodality, and novel human-AI interfaces. This framework helps health systems assess the clinical value of generative AI models.
Recent developments show that companies like Microsoft and Oracle Cerner are integrating generative AI into their EHR systems to automate tasks such as note-taking and medication ordering. However, the successful implementation of these tools requires leadership, incentives, and regulation. Leadership is needed to guide model development, validation, and implementation, while continued regulation is essential to balance the interests of developers, healthcare systems, payers, and patients.
Payer incentives are also crucial for widespread adoption, as generative AI tools may be considered capital expenses. With proper leadership, incentives, and regulation, generative AI in healthcare can be implemented effectively. The article emphasizes the need for a coordinated approach to ensure the safe and effective integration of generative AI into healthcare systems.