This paper compares the ability of single large language models (LLMs) and multi-agent systems to simulate human strategic reasoning in the ultimatum game, a classic economics experiment. The study finds that multi-agent systems outperform single LLMs in simulating human-like behavior, achieving 87.5% accuracy compared to 50% for single LLMs. Multi-agent systems are better at producing strategies that are logically complete and consistent with an assigned personality, and they more accurately reproduce human reasoning in the game. The results suggest that multi-agent systems have the potential to simulate human strategic behavior in complex scenarios, helping decision-makers understand how people behave within a system. The study highlights the importance of accounting for personality differences and strategic thinking in policy design and decision-making. While single LLMs are less effective, multi-agent systems show greater promise in simulating human behavior, particularly in scenarios involving strategic reasoning and social interaction. The research also identifies limitations of LLMs in simulating human behavior for unprecedented events and emphasizes the need for further work on how LLMs can simulate strategic human behavior in novel scenarios. Overall, the study demonstrates that multi-agent systems simulate human strategic reasoning more accurately than single LLMs, making them a valuable tool for decision-makers in policy and system design.
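For readers unfamiliar with the experiment, the ultimatum game's rules can be sketched in a few lines. This is purely illustrative and not the paper's implementation: the function names and the threshold responder strategy below are assumptions for the example, though the payoff rule (a rejected offer leaves both players with nothing) is the standard definition of the game.

```python
def ultimatum_payoffs(pot: int, offer: int, accepted: bool) -> tuple[int, int]:
    """Return (proposer_payoff, responder_payoff).

    The proposer offers `offer` out of `pot` to the responder.
    If the responder rejects, both players receive nothing.
    """
    if not (0 <= offer <= pot):
        raise ValueError("offer must be between 0 and the pot")
    if accepted:
        return pot - offer, offer
    return 0, 0


def threshold_responder(pot: int, offer: int, threshold: float = 0.3) -> bool:
    # A hypothetical responder strategy: accept any offer worth at
    # least 30% of the pot. Human responders often reject "unfair"
    # low offers even though rejection costs them money, which is
    # why the game probes strategic rather than purely rational play.
    return offer >= threshold * pot


pot = 10
offer = 2  # a "low" offer
accepted = threshold_responder(pot, offer)
print(ultimatum_payoffs(pot, offer, accepted))  # rejected, so both get 0: (0, 0)
```

The gap between the payoff-maximizing move (accept any positive offer) and observed human behavior (reject unfair offers) is exactly the kind of personality-dependent reasoning the paper asks LLM simulations to reproduce.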