Knowledge, Perceptions and Attitude of Researchers Towards Using ChatGPT in Research

27 February 2024 | Ahmed Samir Abdelhafiz, Asmaa Ali, Ayman Mohamed Maaly, Hany Hassan Ziady, Eman Anwar Sultan, Mohamed Anwar Mahgoub
This study explores the knowledge, perceptions, and attitudes of Egyptian researchers toward using ChatGPT and other chatbots in academic research. A survey of 200 researchers revealed that 67% had heard of ChatGPT, but only 11.5% had used it in their research, primarily for rephrasing paragraphs and finding references. Over one-third supported listing ChatGPT as an author in scientific publications, while nearly half expressed ethical concerns about using AI in research. Concerns also arose about the potential for AI to automate tasks such as language editing, statistics, and data analysis. Younger researchers, and those with prior familiarity with chatbots, were more likely to use ChatGPT.

Significant ethical and legal issues nonetheless surround AI in research. The International Committee of Medical Journal Editors (ICMJE) guidelines state that chatbots like ChatGPT should not be listed as authors because they cannot be held accountable for the accuracy, integrity, and originality of the work. Similarly, the World Association of Medical Editors (WAME) recommends that chatbots not be considered authors.

The increasing use of chatbots in academic research calls for thoughtful regulation that balances their potential benefits against their inherent limitations and risks. Chatbots should be viewed as assistants to researchers during manuscript preparation and review, not as authors, and researchers should receive proper training to use chatbots and other AI tools effectively and ethically. The study highlights the need for responsible AI integration in research to uphold academic integrity and ethical standards.