2 February 2024 | Jonathan A. Saenger · Jonathan Hunger · Andreas Boss · Johannes Richter
A 63-year-old man with neurological symptoms was misjudged by ChatGPT, leading to delayed treatment and a potentially life-threatening situation. The patient used ChatGPT to evaluate his symptoms after consulting his interventionist, who had classified them as harmless. ChatGPT classified "vision problems" as a "possible" occurrence after pulmonary vein isolation, leading the patient to stay at home. When a third episode occurred, the patient called an ambulance. In the emergency department, the neurological examination was unremarkable apart from mild symptoms. Emergency CT and MRI showed no signs of acute infarction, but the patient was admitted to the stroke unit. The working diagnosis was changed to transient ischemic attack (TIA) in view of his risk factors and the fact that he was already taking rivaroxaban. The anticoagulant was switched to apixaban, and further medications were added. The patient was discharged without residual neurological deficits.
This case highlights the potential of AI as a valuable tool but also the risks of relying on it blindly. Although not specifically designed for medical advice, ChatGPT answered all of the patient's questions to his satisfaction, which may reflect satisfaction bias: the patient was reassured by ChatGPT's answers and did not seek further clarification. Rephrasing the patient's questions could have produced a more accurate response. ChatGPT's training data may be outdated, leaving it unaware of recent medical advances and susceptible to error. Unmoderated communication with a chatbot may yield suboptimal results, as patients may need help discerning which symptoms are relevant to the diagnostic process. Chatbots may also struggle to exclude certain diagnoses when presented with nonspecific symptoms, potentially increasing the burden on the healthcare system. AI should complement, rather than replace, healthcare professionals to ensure a safer and more effective hospital environment, and medical professionals must be actively involved in developing medical AI systems. The challenge for the future is to determine how AI can enhance medical consultations in a safe, effective, and ethical manner.