2024(1), niae013 | Clara Colombatto and Stephen M. Fleming
The study by Clara Colombatto and Stephen M. Fleming explores how the general public attributes phenomenal consciousness to large language models (LLMs), specifically ChatGPT. The researchers surveyed 300 US residents and found that a majority (67%) were willing to attribute some possibility of phenomenal consciousness to LLMs. These attributions were robust and predicted attributions of mental states typically associated with phenomenality, but they also varied with individual differences such as usage frequency: participants who used ChatGPT more often were more likely to attribute consciousness to it. The study also revealed that participants overestimated public opinion on the consciousness of LLMs, suggesting a disconnect between folk intuitions and expert opinion. The findings highlight implications for the legal and ethical status of AI, as folk psychological attributions of consciousness may shape societal concerns about artificial systems.