This paper introduces MotionLCM, a real-time controllable motion generation model that achieves high-quality text-to-motion synthesis and precise motion control in approximately 30 milliseconds. The model is built on the motion latent diffusion model (MLD) and performs one-step or few-step inference to improve runtime efficiency. To enable effective controllability, a motion ControlNet is incorporated into the latent space of MotionLCM, allowing explicit control signals (e.g., the pelvis trajectory) to guide the generation process directly. Experimental results demonstrate that MotionLCM generates high-quality human motions from text and control signals in real time, striking a favorable balance between generation quality and efficiency. The key contributions are the introduction of MotionLCM, the development of a motion ControlNet for controllable generation in the latent space, and extensive experiments validating the effectiveness of the proposed method.
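
To make the few-step inference pipeline concrete, the sketch below shows how one-step or few-step sampling in the motion latent space, with ControlNet residuals injected into the denoiser, could be wired up in PyTorch. This is a minimal illustration under stated assumptions, not the paper's actual implementation: the modules `denoiser`, `controlnet`, and `decoder`, their call signatures, and the timestep schedule are all hypothetical stand-ins for the distilled consistency model, the motion ControlNet, and the motion VAE decoder.

```python
import torch
import torch.nn as nn


class LatentConsistencySampler:
    """Illustrative few-step sampler over motion latents with ControlNet guidance.

    `denoiser`, `controlnet`, and `decoder` are assumed placeholder modules;
    their interfaces are not taken from the paper.
    """

    def __init__(self, denoiser: nn.Module, controlnet: nn.Module, decoder: nn.Module):
        self.denoiser = denoiser
        self.controlnet = controlnet
        self.decoder = decoder

    @torch.no_grad()
    def sample(self, text_emb, control_signal, latent_shape, steps: int = 1):
        # Start from Gaussian noise in the motion latent space.
        z = torch.randn(latent_shape)
        # A simple decreasing timestep schedule (assumed; real schedules differ).
        timesteps = torch.linspace(999, 0, steps + 1).long()[:-1]
        for t in timesteps:
            # The ControlNet turns the explicit control signal (e.g., a pelvis
            # trajectory) into residual features conditioned on the current latent.
            control_residual = self.controlnet(z, t, text_emb, control_signal)
            # A consistency model maps the noisy latent directly to a clean
            # latent estimate, so a single step already yields a usable sample;
            # additional steps refine it.
            z = self.denoiser(z, t, text_emb, control_residual)
        # Decode the clean latent back into a motion sequence.
        return self.decoder(z)
```

Because the consistency model predicts a clean latent in one pass rather than integrating many denoising steps, the loop above runs in a handful of network evaluations, which is what makes the reported ~30 ms real-time budget plausible.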