ManiCM is a real-time 3D diffusion policy for robotic manipulation that leverages consistency models to enable one-step inference. The method addresses the runtime inefficiency of diffusion-based policies by imposing a consistency constraint on the diffusion process, allowing the model to generate robot actions in a single inference step. Specifically, ManiCM formulates a consistent diffusion process in the robot action space conditioned on point cloud observations, in which the original action can be directly denoised from any point along the ODE trajectory. A consistency distillation technique then trains the model to predict the action sample directly, rather than the noise, which accelerates convergence on the low-dimensional action manifold. Evaluated on 31 robotic manipulation tasks from Adroit and MetaWorld, ManiCM achieves an average inference-speed acceleration of 10x over the state-of-the-art method while maintaining competitive success rates, showing that consistency distillation can bring high-quality 3D action generation to a real-time level. Key contributions include: a real-time 3D diffusion policy that learns robot actions conditioned on point clouds; a manipulation consistency distillation technique for direct action prediction; and extensive experiments across the 31 tasks demonstrating that the approach outperforms existing methods in runtime efficiency while remaining competitive in success rate.
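The distillation idea described above (a teacher diffusion policy predicts noise; the student consistency model maps a noisy action at any timestep directly to the clean action, matched against an EMA target one ODE step earlier) can be illustrated with a minimal NumPy sketch. Everything here is a toy stand-in and an assumption, not the paper's implementation: the networks are fixed random linear maps, the observation vector stands in for point-cloud features, and a single DDIM step is used as the teacher ODE solver.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: the real networks condition on point-cloud
# features; here each "model" is a fixed random linear map.
ACT_DIM, OBS_DIM, T_MAX = 7, 16, 50
W_teacher = rng.normal(size=(ACT_DIM + OBS_DIM + 1, ACT_DIM)) * 0.1
W_student = rng.normal(size=(ACT_DIM + OBS_DIM + 1, ACT_DIM)) * 0.1
W_target = W_student.copy()  # EMA copy of the student (frozen here)

def alpha_bar(t):
    """Cosine-style signal level in [0, 1]; smaller at noisier timesteps."""
    return np.cos(0.5 * np.pi * t / T_MAX) ** 2

def eps_teacher(a_t, t, obs):
    """Teacher diffusion policy: epsilon-prediction (predicts the noise)."""
    x = np.concatenate([a_t, obs, [t / T_MAX]])
    return x @ W_teacher

def f(a_t, t, obs, W):
    """Consistency model: predicts the clean action a_0 directly."""
    x = np.concatenate([a_t, obs, [t / T_MAX]])
    # Boundary condition of consistency models: f(a_0, 0) = a_0.
    return a_t if t == 0 else x @ W

def distill_step(a0, obs, t, k=1):
    """One consistency-distillation loss evaluation for timesteps t+k -> t."""
    ab_hi, ab_lo = alpha_bar(t + k), alpha_bar(t)
    noise = rng.normal(size=ACT_DIM)
    # Forward process: noise the demonstration action to timestep t+k.
    a_hi = np.sqrt(ab_hi) * a0 + np.sqrt(1 - ab_hi) * noise
    # One DDIM step of the teacher ODE from t+k down to t.
    eps = eps_teacher(a_hi, t + k, obs)
    a0_hat = (a_hi - np.sqrt(1 - ab_hi) * eps) / np.sqrt(ab_hi)
    a_lo = np.sqrt(ab_lo) * a0_hat + np.sqrt(1 - ab_lo) * eps
    # Consistency loss: the student at t+k must match the EMA target at t,
    # both predicting the clean action directly (not the noise).
    pred = f(a_hi, t + k, obs, W_student)
    target = f(a_lo, t, obs, W_target)
    return float(np.mean((pred - target) ** 2))

a0 = rng.normal(size=ACT_DIM)   # ground-truth action from a demonstration
obs = rng.normal(size=OBS_DIM)  # stand-in for point-cloud features
loss = distill_step(a0, obs, t=10, k=5)
print(f"consistency distillation loss: {loss:.4f}")
```

Because `f` is trained to map *any* point on the ODE trajectory to the clean action, inference reduces to a single call, `f(noise, T_MAX, obs, W_student)`, which is what makes one-step real-time action generation possible.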