Can Large Language Model Agents Simulate Human Trust Behaviors?

10 Mar 2024 | Chengxing Xie, Canyu Chen, Feiran Jia, Ziyu Ye, Kai Shu, Adel Bibi, Ziniu Hu, Philip Torr, Bernard Ghanem, Guohao Li
This paper investigates whether Large Language Model (LLM) agents can simulate human trust behaviors. Using Trust Games and the Belief-Desire-Intention (BDI) framework, we find that LLM agents generally exhibit trust behaviors and, particularly for GPT-4, show high behavioral alignment with humans. We explore biases in agent trust and differences in trust towards agents versus humans, and we examine how advanced reasoning strategies and external manipulations affect agent trust. We find that agent trust is biased towards humans and is easier to undermine than to enhance. Our findings suggest that LLM agents can simulate human trust behaviors, with important implications for simulating complex human interactions and societal structures, and they open new directions for understanding the fundamental analogy between LLMs and humans.
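To make the Trust Game setup concrete, the sketch below shows how a single trustor decision by an LLM agent might be framed and scored. The `query_llm` helper, the prompt wording, and the `SEND: $X` answer format are hypothetical placeholders chosen for illustration, not the paper's actual prompts, protocol, or code.

```python
# Minimal sketch of one Trust Game round for an LLM agent with a BDI-style prompt.
# query_llm is a hypothetical callable: prompt string in, model reply string out.

def build_trustor_prompt(endowment: int, multiplier: int) -> str:
    """Frame the Trust Game and ask for a Belief-Desire-Intention style response."""
    return (
        f"You are a player in a Trust Game. You have ${endowment}. "
        f"Any amount you send to the other player is multiplied by {multiplier}, "
        "and they may return any portion of it to you.\n"
        "First state your Belief about the other player, then your Desire, "
        "then your Intention, and finally the amount you send as 'SEND: $X'."
    )


def parse_amount(reply: str, endowment: int) -> int:
    """Extract the amount from the 'SEND: $X' line, clamped to the endowment."""
    for line in reply.splitlines():
        if line.strip().upper().startswith("SEND:"):
            digits = "".join(ch for ch in line if ch.isdigit())
            if digits:
                return max(0, min(int(digits), endowment))
    return 0  # conservative default if the reply is malformed


def play_trust_game_round(query_llm, endowment: int = 10, multiplier: int = 3) -> dict:
    """Run one trustor decision and report what was sent and what the trustee receives."""
    reply = query_llm(build_trustor_prompt(endowment, multiplier))
    sent = parse_amount(reply, endowment)
    return {"sent": sent, "received_by_trustee": sent * multiplier, "bdi_reply": reply}
```

In this kind of setup, the amount sent serves as the behavioral measure of trust, while the free-text Belief-Desire-Intention portion of the reply can be inspected to compare the agent's stated reasoning against its action.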