HumanoidBench is a simulated humanoid robot benchmark for evaluating whole-body locomotion and manipulation. Built on the MuJoCo physics engine, it features a Unitree H1 humanoid equipped with two dexterous Shadow Hands and comprises 15 whole-body manipulation tasks and 12 locomotion tasks, ranging from walking, running, and maze navigation to reaching, shelf rearrangement, and package unloading. The benchmark supports both learning-based and model-based approaches, providing a standardized platform for identifying the challenges of high-dimensional humanoid robot learning and control. Benchmarking results show that state-of-the-art reinforcement learning algorithms struggle on most tasks, whereas a hierarchical learning approach built on robust low-level policies outperforms flat, end-to-end methods on complex tasks. The results also highlight common failures in tasks requiring long-horizon planning and coordination of multiple body parts. The benchmark is designed to stimulate further research on whole-body algorithms for humanoid platforms. Open-source code is available at https://humanoid-bench.github.io.
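Benchmarks of this kind are typically driven through a reset/step environment loop, where a policy receives observations and emits joint-level actions each control step. The following is a minimal, self-contained sketch of that interaction pattern with a random policy; the toy environment here is purely illustrative and is not HumanoidBench's actual API (its class, dimensions, and reward are made-up stand-ins):

```python
import random

class ToyHumanoidEnv:
    """Illustrative stand-in for an RL task environment.

    Mimics only the reset/step interaction pattern; the real benchmark's
    tasks are MuJoCo-backed and expose far higher-dimensional
    observation and action spaces.
    """

    def __init__(self, obs_dim=5, act_dim=3, horizon=10):
        self.obs_dim = obs_dim    # size of the observation vector
        self.act_dim = act_dim    # size of the action vector
        self.horizon = horizon    # episode length in control steps
        self.t = 0

    def reset(self):
        """Start a new episode and return the initial observation."""
        self.t = 0
        return [0.0] * self.obs_dim

    def step(self, action):
        """Advance one control step; return (observation, reward, done)."""
        self.t += 1
        obs = [random.gauss(0.0, 1.0) for _ in range(self.obs_dim)]
        # Placeholder reward: penalize large actions.
        reward = -sum(a * a for a in action)
        done = self.t >= self.horizon
        return obs, reward, done

def random_rollout(env, seed=0):
    """Run one episode with uniformly random actions; return the return."""
    rng = random.Random(seed)
    obs = env.reset()
    total, done = 0.0, False
    while not done:
        action = [rng.uniform(-1.0, 1.0) for _ in range(env.act_dim)]
        obs, reward, done = env.step(action)
        total += reward
    return total

episode_return = random_rollout(ToyHumanoidEnv())
```

A learning algorithm replaces the random action with a policy's output and updates that policy from the collected (observation, action, reward) transitions; a model-based approach instead fits a dynamics model to the same transitions and plans through it.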