Attributions toward Artificial Agents in a modified Moral Turing Test


March 22, 2024 | Eyal Aharoni*, Sharlene Fernandes, Daniel J. Brady, Caelan Alexander, Michael Criner, Kara Queen, Javier Rando, Eddy Nahmias, & Victor Crespo
This study investigates whether people can distinguish between moral evaluations generated by an advanced AI language model (GPT-4) and those written by humans. The researchers conducted a modified Moral Turing Test (m-MTT) in which participants first rated the quality of moral evaluations and were then asked to guess the source of each evaluation. Participants rated the AI's moral reasoning as superior in quality, yet they identified the source of the evaluations at rates significantly above chance. This suggests that the AI's perceived superiority in moral reasoning may itself have been a key cue that allowed people to tell human and AI-generated evaluations apart, so the AI failed the traditional MTT even though its reasoning was judged to be better. The study highlights the need for safeguards around generative language models to prevent potentially harmful moral guidance from being accepted uncritically. It also raises concerns about the risks of relying on AI for moral advice and points to the need for further research on how people perceive and interact with AI in moral domains.