Meta-prompting is a scaffolding technique that enhances the functionality of language models (LMs) by enabling a single model to act as both a conductor and a panel of diverse experts. High-level instructions break a complex task into smaller, manageable subtasks, each handled by a specialized "expert" instance of the same LM. The conductor coordinates communication among these experts, integrates their outputs, and applies its own critical thinking and verification to refine and validate the final result. Because meta-prompting is zero-shot and task-agnostic, users need not supply detailed, task-specific instructions. Integrating external tools, such as a Python interpreter, further broadens the framework's applicability and utility.

In experiments with GPT-4, meta-prompting outperforms conventional scaffolding methods, achieving significant accuracy improvements across tasks including the Game of 24, Checkmate-in-One, and Python Programming Puzzles. Its effectiveness stems from leveraging the collective intelligence of multiple expert instances, which yields more accurate and reliable responses, together with dynamic expert selection and real-time code execution that improve the efficiency and precision of problem-solving. However, the approach faces challenges in cost efficiency, scalability, and its strictly sequential (linear) operation, which may limit its applicability to smaller models. Despite these limitations, meta-prompting shows significant potential for broad application beyond strictly computational problems.
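The conductor/expert loop described above can be sketched in a few dozen lines. The following is a minimal, illustrative sketch, not the authors' implementation: `call_lm(messages) -> str` is a placeholder for a real chat-model API call, and the `Expert <name>: """..."""` call format and `FINAL ANSWER:` stop marker are assumed conventions chosen here for clarity. The key property it preserves is that each expert is a fresh instance of the same LM that sees only the instructions the conductor wrote for it, while the conductor keeps the full history.

```python
import re

def run_meta_prompt(task, call_lm, max_rounds=10):
    """Conductor loop: one LM plays both conductor and experts.

    call_lm(messages) -> str is a stand-in for a real chat-API call.
    """
    history = [
        {"role": "system",
         "content": ('You are the conductor. Break the task into subtasks and '
                     'delegate each to a fresh expert by writing:\n'
                     'Expert <name>:\n"""<instructions>"""\n'
                     'When confident, reply with FINAL ANSWER: <answer>.')},
        {"role": "user", "content": task},
    ]
    for _ in range(max_rounds):
        reply = call_lm(history)
        history.append({"role": "assistant", "content": reply})
        if "FINAL ANSWER:" in reply:
            return reply.split("FINAL ANSWER:", 1)[1].strip()
        # Extract the first expert call, if any.
        m = re.search(r'Expert ([^:\n]+):\s*"""(.*?)"""', reply, re.DOTALL)
        if m is None:
            history.append({"role": "user",
                            "content": "Call an expert or give a final answer."})
            continue
        name, instructions = m.group(1), m.group(2).strip()
        # A fresh expert instance: it sees only its instructions,
        # not the conductor's history.
        expert_reply = call_lm([
            {"role": "system", "content": f"You are {name}."},
            {"role": "user", "content": instructions},
        ])
        history.append({"role": "user",
                        "content": f"{name}'s output:\n{expert_reply}"})
    return None  # budget exhausted without a final answer

# Demo with a scripted stand-in for the LM (no API access needed).
def make_scripted_lm():
    turns = iter([
        'Expert Mathematician:\n"""Compute 6 * 7."""',  # conductor delegates
        '42',                                           # expert answers
        'FINAL ANSWER: 42',                             # conductor concludes
    ])
    return lambda messages: next(turns)

print(run_meta_prompt("What is 6 * 7?", make_scripted_lm()))  # prints 42
```

Tool integration fits the same dispatch point: a special-cased expert name (e.g. "Expert Python") could route the extracted instructions to an interpreter instead of the LM, which is how real-time code execution slots into the framework.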