A study explores how people attribute consciousness to large language models (LLMs) such as ChatGPT. Researchers surveyed 300 U.S. residents and found that 67% attributed at least some possibility of phenomenal consciousness, that is, subjective experience, to ChatGPT. These attributions were robust, predicting the attribution of mental states typically associated with consciousness, yet also flexible, varying with individual factors such as usage frequency. The results show that folk intuitions about AI consciousness can diverge from expert opinion, with potential implications for the legal and ethical status of AI.
Participants were asked to rate how capable ChatGPT was of having subjective experience, using a scale from 1 to 100. They also reported their confidence in this judgment and their beliefs about how others would view ChatGPT. The study found that participants who used ChatGPT more frequently were more likely to believe it had subjective experiences. Additionally, participants consistently overestimated how much others would think ChatGPT was conscious.
The study also examined attributions of specific mental states and found that mental states related to experience were the main driver of consciousness attributions. Participants who attributed more phenomenal consciousness to ChatGPT also attributed more mental states related to experience. The results suggest that people's intuitions about AI consciousness may be influenced by their familiarity with the technology.
The study highlights a discrepancy between folk intuitions and expert opinions on artificial consciousness, with significant implications for the ethical, legal, and moral status of AI. The findings suggest that as LLMs become more widespread, people may increasingly perceive them as having some degree of consciousness. However, the study also notes that these findings may not generalize across different samples and cultures, and that future research is needed to explore the factors that influence consciousness attributions.