This paper introduces an iterative experience refinement framework for software-developing agents powered by large language models (LLMs). The framework enables agents to refine their experiences iteratively during continual task execution, improving their adaptability and efficiency. Two fundamental patterns are proposed: the successive pattern, which refines experiences based on the latest task batch, and the cumulative pattern, which integrates experiences from all previous task batches. A heuristic experience elimination mechanism is also introduced to prioritize high-quality and frequently used experiences, streamlining the experience space. Evaluated against several baselines, including GPT-Engineer, MetaGPT, ChatDev, and ECL, the framework demonstrates its effectiveness in improving software generation quality, executability, and consistency. Extensive experiments show that while the successive pattern may yield superior results, the cumulative pattern provides more stable performance. Moreover, experience elimination achieves better performance using a high-quality subset comprising just 11.54% of the experiences, and proves crucial for maintaining the quality of the experience pool, especially under the cumulative pattern. The paper concludes that the proposed framework enables LLM agents to refine experiences iteratively during continual task execution, leading to improved performance and adaptability.
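The heuristic experience elimination described above can be illustrated with a minimal sketch: score each experience by its quality and usage frequency, then retain only the top-scoring fraction of the pool. The `Experience` fields, the scoring rule, and the `keep_ratio` parameter are illustrative assumptions, not the paper's exact formulation.

```python
from dataclasses import dataclass

@dataclass
class Experience:
    content: str
    quality: float   # estimated quality score in [0, 1] (assumed representation)
    use_count: int   # how often the experience has been retrieved

def eliminate(pool: list[Experience], keep_ratio: float) -> list[Experience]:
    """Keep only the top `keep_ratio` fraction of experiences, ranked by a
    heuristic score that rewards both quality and frequent use."""
    ranked = sorted(pool, key=lambda e: e.quality * (1 + e.use_count), reverse=True)
    keep = max(1, int(len(ranked) * keep_ratio))
    return ranked[:keep]

pool = [
    Experience("use a virtual environment for builds", 0.9, 5),
    Experience("retry flaky tests blindly", 0.4, 1),
    Experience("pin dependency versions", 0.8, 3),
]
kept = eliminate(pool, keep_ratio=0.34)  # retains only the highest-scoring subset
```

Under this sketch, a small high-quality subset (echoing the paper's 11.54% figure) survives elimination, keeping the experience space compact as tasks accumulate.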