The paper introduces Boosting of Thoughts (BoT), an automated prompting framework that enables large language models (LLMs) to solve complex problems through trial-and-error reasoning. BoT iteratively explores and self-evaluates multiple trees of thoughts, accumulating error analyses and detailed advice to refine the prompt. Starting from a simple prompt, BoT augments it with *experience*, the LLM's detailed feedback on the thought chains it has generated, until a final answer is reached. Experiments with GPT-4 and Llama2 on a range of complex mathematical problems show that BoT consistently achieves problem-solving rates higher than or comparable to those of other advanced prompting approaches. BoT's key contributions are that it relies solely on a simple initial prompt, performs an *experience*-driven iterative refinement process, and remains scalable and effective across diverse tasks.
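The experience-driven loop described above can be sketched in Python. This is a minimal illustration under stated assumptions: the `MockLLM` class and method names (`generate_chain`, `evaluate`, `analyze_errors`) are hypothetical stand-ins, not the paper's actual interface, and the scoring heuristic is invented purely so the loop terminates.

```python
class MockLLM:
    """Toy stand-in for a real LLM; scores improve as experience grows.
    (Hypothetical interface for illustration, not the paper's API.)"""

    def generate_chain(self, prompt):
        # Pretend the chain quality depends on how much feedback the prompt carries.
        return f"chain ({prompt.count('Experience:')} experiences used)"

    def evaluate(self, chain):
        # Invented heuristic: chains built on more experience score higher.
        n_exp = int(chain.split("(")[1].split()[0])
        return min(1.0, 0.4 + 0.3 * n_exp)

    def analyze_errors(self, chains):
        # In BoT this would be the LLM's error analysis and advice.
        return "avoid arithmetic slips; verify each step"


def solve_with_bot(problem, llm, iterations=5, num_trees=2):
    """Iteratively boost a simple prompt with accumulated 'experience'."""
    experience = []  # error analyses and advice gathered across iterations
    base_prompt = f"Solve the problem step by step: {problem}"
    answer = None
    for _ in range(iterations):
        # Augment the simple base prompt with all feedback gathered so far.
        boosted = base_prompt + "".join(f"\nExperience: {e}" for e in experience)
        # Explore several thought structures (trees) in parallel.
        chains = [llm.generate_chain(boosted) for _ in range(num_trees)]
        # Self-evaluate each chain and keep the best candidate answer.
        best_score, answer = max((llm.evaluate(c), c) for c in chains)
        # Fold the error analysis back into the prompt for the next round.
        experience.append(llm.analyze_errors(chains))
        if best_score >= 1.0:  # stop once a chain is judged fully correct
            break
    return answer, len(experience)
```

A quick run, e.g. `solve_with_bot("What is 2+2?", MockLLM())`, shows the loop accumulating experience over three rounds before the mock evaluator reaches a full score; with a real model, the evaluation and error analysis would themselves be LLM calls.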