Artificial Hallucinations in ChatGPT: Implications in Scientific Writing

02/19/2023 | Hussam Alkaissi, Samy I. McFarlane
ChatGPT, a large language model introduced by OpenAI, has significant implications for scientific writing. This study evaluates ChatGPT's ability to generate accurate scientific content, focusing on two medical cases: homocystinuria-associated osteoporosis and late-onset Pompe disease (LOPD). The researchers tested ChatGPT's performance in writing about the pathogenesis of these conditions and found both strengths and weaknesses. ChatGPT produced a paragraph on homocystinuria-induced osteoporosis that touched on key aspects, but when asked for references, it provided non-existent or unrelated citations. Similarly, ChatGPT generated an essay on liver involvement in LOPD, a topic not previously reported in the literature.

The study highlights the potential of ChatGPT in academic writing, particularly in synthesizing literature reviews and managing references. However, it also raises concerns about the accuracy and reliability of AI-generated content: ChatGPT can produce seemingly realistic text, yet it may generate "artificial hallucinations," that is, false or misleading information presented as fact. The study emphasizes the need for rigorous evaluation of AI-generated content in scientific writing and suggests that journals and conferences implement policies to ensure the integrity of scientific manuscripts. The use of AI in scientific writing remains controversial, with some viewing it as a useful tool and others as a threat to authorship integrity. The study concludes that while ChatGPT can assist in academic writing, its outputs must be critically evaluated to maintain scientific standards.