Addressing 6 challenges in generative AI for digital health: A scoping review

May 23, 2024 | Tara Templin, Monika W. Perez, Sean Sylvia, Jeff Leek, Nasa Sinnott-Armstrong
This scoping review identifies six key challenges in generative AI for digital health and explores potential solutions. Generative AI can exhibit biases, compromise data privacy, misinterpret prompts, and produce hallucinations; despite its potential, practitioners must understand these tools and their limitations. The review analyzed 120 articles published by March 2024, focusing on challenges in medical settings and their potential solutions. The most commonly evaluated challenges are bias, privacy, hallucination, and regulatory compliance; other concerns, such as overreliance on text models, adversarial misprompting, and jailbreaking, appear less often in the literature.

Challenge 1: Generative AI models are biased. Bias in machine learning can lead to discriminatory and flawed medical recommendations. Techniques such as debiasing, reweighting, and incorporating human feedback are used to mitigate bias, though their effectiveness remains debated.

Challenge 2: Generative AI can compromise data privacy. Third-party tools raise ethical and regulatory concerns. Localized hosting and lightweight models offer potential ways to enhance privacy.

Challenge 3: Generative AI misunderstands prompts. Effective prompting is crucial, and practitioners should understand heuristics for crafting effective prompts. Jailbreaking is a related concern, and mitigating it requires expert oversight.

Challenge 4: Generative AI hallucinates. Hallucinations can lead to inaccurate or nonsensical outputs. External review by experts is recommended, and adjusting model parameters, such as the sampling temperature, can help reduce hallucinations.

Challenge 5: Most generative AI development is focused on language models. Medical practice involves diverse data types, and alternative models may be better suited for applications such as imaging or drug discovery.

Challenge 6: Generative AI systems are dynamic. These systems adapt and make decisions based on experience, requiring ongoing evaluation to ensure safety and effectiveness.
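To make the reweighting technique mentioned under Challenge 1 concrete, here is a minimal sketch of inverse-frequency sample reweighting, one common way to keep an under-represented patient group from being drowned out during training. The group labels and cohort sizes below are illustrative only, not drawn from the review.

```python
# Hedged sketch: inverse-frequency reweighting to counter group imbalance
# in training data. Groups and sizes here are made up for illustration.
from collections import Counter

def inverse_frequency_weights(groups):
    """Give each sample a weight inversely proportional to its group's
    frequency, so every group contributes equal total weight to the loss."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    # weight = n / (k * count): each group's weights sum to n / k
    return [n / (k * counts[g]) for g in groups]

# Toy cohort: 8 samples from majority group "A", 2 from minority group "B"
groups = ["A"] * 8 + ["B"] * 2
weights = inverse_frequency_weights(groups)
print(weights[0], weights[-1])  # A samples get 0.625, B samples get 2.5
```

These weights would typically be passed to a training routine as per-sample weights; whether reweighting alone removes downstream disparities is, as the review notes, debated.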
Regulatory compliance is crucial for medical devices. The review emphasizes the need for diverse data sets, robust fairness evaluations, and interdisciplinary collaboration to address these challenges. It also highlights the importance of regulation, transparency, and ethical considerations in the development and use of generative AI in digital health. While much generative AI development focuses on language models, there is significant potential in non-text applications. Renewed attention to regulation will clarify appropriate use within clinical practice and encourage innovation around HIPAA-compliant synthetic data. Digital health technologies will likely improve as the field's perceived challenges are better understood and solutions are collected from digital health practitioners and interdisciplinary collaborators.
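Challenge 4 above mentions that adjusting model parameters can help reduce hallucinations; the most common such parameter is the sampling temperature. The sketch below shows how temperature reshapes a next-token distribution: the logits are invented for illustration, and real systems expose temperature through their own APIs rather than this helper.

```python
# Hedged sketch: how a decoding "temperature" reshapes a model's
# next-token distribution. Logits below are made up for illustration.
import math

def softmax_with_temperature(logits, temperature):
    """Lower temperature sharpens the distribution toward the top-scoring
    token (more deterministic); higher temperature flattens it."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                              # subtract max for stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                         # hypothetical token scores
low = softmax_with_temperature(logits, 0.2)      # near-greedy decoding
high = softmax_with_temperature(logits, 2.0)     # more diffuse sampling
print(round(low[0], 3), round(high[0], 3))
```

Lowering temperature concentrates probability on the highest-scoring token, which tends to reduce free-wheeling completions; it does not, by itself, guarantee factual accuracy, which is why the review also recommends expert review of outputs.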
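As an illustration of the prompting heuristics discussed under Challenge 3, a common pattern is to state a role, explicit constraints, and a refusal rule, and to delimit the input clearly. The template wording and the toy clinical note below are hypothetical, not a validated clinical prompt.

```python
# Hedged sketch of prompting heuristics: role, grounding constraint,
# refusal rule, and clearly delimited input. Wording is illustrative only.
def build_prompt(task, source_text):
    return (
        "You are assisting a licensed clinician. "           # role
        "Answer only from the text between <doc> tags; "     # grounding
        "if the answer is not present, say 'not stated'.\n"  # refusal rule
        f"Task: {task}\n"
        f"<doc>\n{source_text}\n</doc>"
    )

prompt = build_prompt(
    "List the medications mentioned.",
    "Patient reports taking lisinopril 10 mg daily.",  # toy note
)
print(prompt)
```

Such templates reduce, but do not eliminate, misinterpretation and jailbreaking risks, which is why the review pairs prompting heuristics with expert oversight.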