This paper presents a large language model (LLM)-based system that enables quadrupedal robots to perform long-horizon tasks requiring problem-solving abilities beyond short-term motions. At the high level, LLM-driven reasoning generates hybrid discrete-continuous plans from task descriptions; at the low level, reinforcement learning (RL) trains motion planning and control skills for rich environmental interaction. The LLM-based reasoning layer comprises three agents: a semantic planner that decomposes the task, a parameter calculator that predicts the arguments in the plan, and a code generator that converts the plan into executable robot code. The system is evaluated on long-horizon tasks such as turning off the lights before exiting an office and delivering a package into a room with a closed door. Simulation and real-world experiments show that it discovers multi-step strategies and exhibits non-trivial behaviors, including building tools and notifying a human for help. It achieves a success rate of over 70% in simulation and has been deployed successfully in the real world. Compared with alternative approaches, including hierarchical RL and behavior trees, the LLM-based reasoning module achieves higher success rates and task completion. Designed for complex tasks that combine locomotion and manipulation skills, the system adapts to different environments and scenarios, highlighting the importance of coupling high-level reasoning with low-level control for long-horizon task execution in robotics.
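The three-agent reasoning layer can be pictured as a sequential pipeline in which each agent's output feeds the next. The sketch below is a minimal illustration, not the paper's implementation: the agent roles follow the abstract, but the `query_llm` function, its canned outputs, and the robot API names are placeholder assumptions so the example runs without any model access.

```python
# Minimal sketch of a three-agent LLM reasoning pipeline:
# semantic planner -> parameter calculator -> code generator.
# query_llm stands in for a real LLM call; it returns fixed demo
# strings so the pipeline is runnable without an API.

def query_llm(role: str, prompt: str) -> str:
    """Placeholder for an LLM call; returns canned outputs per agent role."""
    canned = {
        "planner": "1. walk_to(door)\n2. press(handle)\n3. push(door)",
        "calculator": "walk_to: target=(2.0, 1.5); press: force=20.0",
        "code_generator": "robot.walk_to(2.0, 1.5)\nrobot.press(20.0)\nrobot.push()",
    }
    return canned[role]

def plan_task(task_description: str) -> str:
    # Agent 1: semantic planner decomposes the task into discrete subtasks.
    subtasks = query_llm("planner", f"Decompose the task: {task_description}")
    # Agent 2: parameter calculator fills in continuous arguments for the plan.
    params = query_llm("calculator", f"Predict arguments for: {subtasks}")
    # Agent 3: code generator converts the parameterized plan into robot code.
    return query_llm("code_generator", f"Emit code for: {subtasks} using {params}")

if __name__ == "__main__":
    print(plan_task("deliver the package into the room with the closed door"))
```

In a real system, each `query_llm` call would carry a role-specific prompt to an actual model, and the generated code would invoke the RL-trained low-level skills rather than the hypothetical `robot.*` methods shown here.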