Large Language Models Are Neurosymbolic Reasoners


2024 | Meng Fang, Shilong Deng, Yudi Zhang, Zijing Shi, Ling Chen, Mykola Pechenizkiy, Jun Wang
This paper explores the potential of Large Language Models (LLMs) as neurosymbolic reasoners in text-based games, focusing on tasks that require symbolic reasoning such as arithmetic, map reading, sorting, and commonsense reasoning. The proposed approach pairs an LLM agent with external symbolic modules. The agent is initialized with a role and task description, then at each step receives an observation and a set of valid actions from the game environment. From these inputs it selects an action, which is executed either in the game environment or by the appropriate symbolic module, and the loop repeats until the task is complete.

Experiments show that this design substantially enhances the capability of LLMs as automated agents for symbolic reasoning, achieving an average performance of 88% across all tasks. The agent outperforms strong baselines, including a Deep Reinforcement Relevance Network equipped with symbolic modules and a Behavior Cloned Transformer trained on extensive expert data. It is particularly effective on the arithmetic and map-reading games, but performs suboptimally on sorting, which the authors attribute to limited memory capacity. The study also highlights that constrained prompts improve the agent's performance across all tasks.

The findings suggest that LLMs can act as neurosymbolic reasoners capable of performing symbolic tasks in real-world applications. The paper also discusses limitations of the approach, including the need for more detailed prompts and the potential to extend it to more complex domains.
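The agent loop described above can be summarized in a short sketch. The Python snippet below is a minimal, illustrative version only: it assumes a hypothetical text-game interface (env.reset() / env.step()), a toy CalculatorModule standing in for the paper's symbolic modules, and a stubbed llm_choose_action() wrapping the LLM call. None of these names come from the paper's code.

```python
from dataclasses import dataclass


@dataclass
class GameStep:
    """One step of the (hypothetical) text-game interface."""
    observation: str
    valid_actions: list[str]
    done: bool = False
    score: float = 0.0


class CalculatorModule:
    """Toy symbolic module: evaluates arithmetic commands like 'add 3 5'."""

    def handles(self, action: str) -> bool:
        return action.split()[0] in {"add", "sub", "mul", "div"}

    def execute(self, action: str) -> str:
        op, a, b = action.split()
        a, b = float(a), float(b)
        result = {"add": a + b, "sub": a - b, "mul": a * b, "div": a / b}[op]
        return f"Result: {result}"


def llm_choose_action(prompt: str, valid_actions: list[str]) -> str:
    """Placeholder for the LLM call; a constrained prompt would ask the
    model to pick exactly one action from valid_actions."""
    return valid_actions[0]  # stub: real code would parse the LLM reply


def run_episode(env, modules, role_description: str, max_steps: int = 20) -> float:
    """Alternate between the game environment and symbolic modules."""
    history: list[str] = []
    step = env.reset()
    for _ in range(max_steps):
        # Build the prompt from the role/task description, the action
        # history, the current observation, and the valid actions.
        prompt = "\n".join([role_description, *history,
                            f"Observation: {step.observation}",
                            f"Valid actions: {step.valid_actions}"])
        action = llm_choose_action(prompt, step.valid_actions)
        history.append(f"Action: {action}")

        # Route the action: a symbolic module handles e.g. arithmetic,
        # everything else goes back to the game environment.
        module = next((m for m in modules if m.handles(action)), None)
        if module is not None:
            step.observation = module.execute(action)
        else:
            step = env.step(action)
        if step.done:
            break
    return step.score
```

The routing step (dispatching an action either to the environment or to a symbolic module) is the essential idea; the prompt layout and module interface shown here are assumptions made for the sake of a self-contained example.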