LLMs Among Us: Generative AI Participating in Digital Discourse

8 Feb 2024 | Kristina Radivojevic, Nicholas Clark, Paul Brenner
The emergence of Large Language Models (LLMs) has the potential to reshape social media platforms, bringing both opportunities and threats such as bias and privacy concerns. This study explores how well humans can distinguish between human and bot participants in online discourse using an experimental framework called "LLMs Among Us," deployed on the Mastodon platform. The framework involved 30 bot participants built from three LLMs (GPT-4, Llama 2 Chat, and Claude 2), each assigned 10 personas based on global political influences. Human participants interacted with these bots without knowing the bot/human ratio and were surveyed after each of three experimental rounds on their ability to identify bots. Despite knowing that both bots and humans were present, participants correctly identified the nature of other users only 42% of the time. The choice of persona had a greater impact on human perception than the choice of LLM: Persona 8 was the most likely to be identified as a bot, while Personas 3 and 6 were the least likely. The results indicate that LLMs can generate realistic discourse and may be used to manipulate information and digital conversations, including in deceptive ways such as spreading misinformation. While LLMs can be effective at generating content, they also pose risks to privacy, ethics, and safety. The study concludes that further research is needed to understand the capabilities and potential dangers of LLMs in social media environments.
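The paper does not publish its bot implementation, but the architecture it describes, an LLM prompted with a fixed persona and posting to a Mastodon instance, can be sketched in a few lines. The following is a minimal, hypothetical sketch assuming the Mastodon.py and openai Python clients; the persona text, model name, credentials, and server URL are illustrative stand-ins, not the study's actual configuration.

```python
# Illustrative sketch only: the study's real bots, prompts, and infrastructure
# are not public. Assumes the Mastodon.py and openai packages are installed.
from mastodon import Mastodon
from openai import OpenAI

# Hypothetical persona prompt; the study used 10 personas based on
# global political influences, whose exact wording is not published.
PERSONA = (
    "You are a politically engaged social media user. "
    "Reply in a casual, conversational tone and keep posts under 500 characters."
)

llm = OpenAI()  # reads OPENAI_API_KEY from the environment
bot = Mastodon(
    access_token="BOT_ACCESS_TOKEN",                 # placeholder credential
    api_base_url="https://example-mastodon.social",  # placeholder instance
)

def reply_as_persona(incoming_text: str) -> str:
    """Generate a persona-conditioned reply to another user's post."""
    response = llm.chat.completions.create(
        model="gpt-4",  # one of the three models used in the study
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": incoming_text},
        ],
    )
    return response.choices[0].message.content

# Post the generated reply to the bot's Mastodon timeline.
post = reply_as_persona("What do you make of the new platform moderation rules?")
bot.status_post(post)
```

Under this design, swapping the model behind `reply_as_persona` while holding `PERSONA` fixed (or vice versa) is what lets the study separate the effect of the LLM from the effect of the persona, which proved to be the stronger factor in human perception.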