This paper introduces a novel approach to training specialized large language model (LLM) agents by treating the agents' functions as learnable parameters. The method enables LLM agents to improve their performance on downstream tasks without modifying the LLM weights, which is particularly useful when those weights are inaccessible or expensive to modify. Inspired by how humans forge tools to adapt to real-world tasks, the method progressively forges the agent's functions to better solve the tasks at hand. The proposed method, called AgentOptimizer, leverages the LLM itself to update the agent's functions based on execution history and performance on training tasks, and it employs two strategies, roll-back and early-stop, to streamline the training process. The method is evaluated on three benchmarks: mathematical reasoning (MATH), tabular processing (TabMWP), and general real-world problems (GAIA). The results show that agent training significantly improves the performance of LLM agents on all three tasks; the method also exhibits favorable learning curves and transfers across domains, and it has been integrated into the AutoGen library. The contributions include a new paradigm for training LLM agents, a novel method realizing that paradigm, and extensive experiments demonstrating its effectiveness.
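To make the training procedure concrete, the sketch below illustrates the loop described above: the function set acts as the learnable parameters, an LLM-driven step proposes updates, roll-back reverts updates that hurt training performance, and early-stop halts training once updates stop helping. This is a minimal illustration under stated assumptions, not the AutoGen/AgentOptimizer API; `FunctionSet`, `propose_function_update`, and `evaluate_agent` are hypothetical stand-ins.

```python
from dataclasses import dataclass, field


@dataclass
class FunctionSet:
    """The agent's 'learnable parameters': a registry of callable tools."""
    functions: dict = field(default_factory=dict)

    def copy(self) -> "FunctionSet":
        return FunctionSet(dict(self.functions))


def propose_function_update(functions: FunctionSet, history: list) -> FunctionSet:
    """Hypothetical LLM-driven step: given the current function set and the
    execution history on training tasks, ask the LLM to add, revise, or
    remove a function. This placeholder returns the set unchanged."""
    return functions.copy()


def evaluate_agent(functions: FunctionSet, tasks: list) -> tuple[float, list]:
    """Hypothetical evaluation step: run the agent equipped with `functions`
    on the training tasks, returning accuracy and the execution history."""
    return 0.0, []


def train(tasks: list, max_epochs: int = 10, patience: int = 3) -> FunctionSet:
    functions = FunctionSet()
    best_functions, best_score = functions.copy(), -1.0
    stale = 0
    history: list = []

    for _ in range(max_epochs):
        candidate = propose_function_update(functions, history)
        score, history = evaluate_agent(candidate, tasks)

        if score > best_score:
            # Keep the improved function set.
            best_functions, best_score = candidate.copy(), score
            functions = candidate
            stale = 0
        else:
            # Roll-back: discard the update and revert to the best set so far.
            functions = best_functions.copy()
            stale += 1

        # Early-stop: halt once updates stop improving training performance.
        if stale >= patience:
            break

    return best_functions
```

In this reading, roll-back plays the role that rejecting a bad gradient step would in weight-based training, while early-stop bounds the cost of the LLM-driven update loop.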