Symbolic Learning Enables Self-Evolving Agents

26 Jun 2024 | Wangchunshu Zhou, Yixin Ou, Shengwei Ding, Long Li, Jialong Wu, Tiannan Wang, Jiamin Chen, Shuai Wang, Xiaohua Xu, Ningyu Zhang, Huajun Chen, Yuchen Eleanor Jiang
This paper introduces agent symbolic learning, a framework that enables language agents to self-evolve by learning from data. Current language agents are either model-centric or engineering-centric, requiring manual intervention for optimization. The proposed framework, inspired by connectionist learning, allows agents to autonomously optimize their symbolic components using language-based losses, gradients, and optimizers. It treats an agent as a symbolic network in which the prompts, tools, and the way they are stacked play the role of the network's weights.

By mimicking back-propagation and gradient descent, the framework enables agents to update themselves after deployment, yielding "self-evolving agents." The framework first conducts a forward pass, storing the input, output, prompts, and tool usage at each node in a trajectory. A language loss is then computed with a prompt-based loss function, followed by back-propagation that produces language gradients. These gradients are used to update the prompts, the tools, and the agent pipeline itself. The framework also supports multi-agent systems, either by treating nodes as separate agents or by allowing multiple agents to act within a single node.

Experiments on standard LLM benchmarks and on complex agentic tasks show that the framework outperforms existing methods, achieving significant improvements on tasks such as software development and creative writing and demonstrating its effectiveness in real-world scenarios. The framework is open-sourced to facilitate future research on data-centric agent learning.

Agent symbolic learning is a holistic optimization method: it jointly optimizes all symbolic components within an agent system, including prompts, tools, and the pipeline. This enables agents to learn from data and adapt to new tasks, moving agent research from engineering-centric toward data-centric. The framework supports batched training and includes mechanisms for handling errors and rollbacks, ensuring robust performance.
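The training loop described above (forward pass with a trajectory, a prompt-based language loss, language gradients back-propagated node by node, and a prompt-rewriting "optimizer" step) can be sketched as follows. This is a minimal illustration, not the authors' open-sourced implementation: the `llm` stub, the `SymbolicAgent` class, and all prompt templates are hypothetical placeholders.

```python
def llm(prompt: str) -> str:
    """Placeholder for a real language-model call (hypothetical stub)."""
    return f"<response to: {prompt[:30]}...>"

class SymbolicAgent:
    """Sketch of an agent as a symbolic network: prompts act as 'weights'."""

    def __init__(self, node_prompts: dict):
        self.node_prompts = node_prompts  # node name -> prompt text

    def forward(self, task_input: str):
        """Forward pass: run each node in order, recording a trajectory."""
        trajectory, state = [], task_input
        for name, prompt in self.node_prompts.items():
            output = llm(f"{prompt}\nInput: {state}")
            trajectory.append({"node": name, "prompt": prompt,
                               "input": state, "output": output})
            state = output
        return state, trajectory

    def language_loss(self, trajectory, task_goal: str) -> str:
        """Prompt-based 'loss': a textual critique of the final output."""
        return llm(f"Critique this output against the goal '{task_goal}':\n"
                   f"{trajectory[-1]['output']}")

    def backward(self, trajectory, loss_text: str) -> dict:
        """Back-propagate language 'gradients' from the last node to the first."""
        gradients, downstream = {}, loss_text
        for step in reversed(trajectory):
            gradients[step["node"]] = llm(
                f"Given downstream feedback:\n{downstream}\n"
                f"Suggest how this node's prompt should change:\n{step['prompt']}")
            downstream = gradients[step["node"]]
        return gradients

    def update(self, gradients: dict) -> None:
        """'Optimizer' step: rewrite each prompt using its language gradient."""
        for name, grad in gradients.items():
            self.node_prompts[name] = llm(
                f"Rewrite this prompt per the feedback.\n"
                f"Prompt: {self.node_prompts[name]}\nFeedback: {grad}")
```

A single training step would then be `forward` → `language_loss` → `backward` → `update`, with batched training averaging or aggregating the textual feedback across examples before the update.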
The results show that the framework is more robust and effective on complex real-world tasks than traditional methods. This transition from model-centric to data-centric research is a significant step toward achieving artificial general intelligence.