This paper presents a character control framework that leverages motion diffusion probabilistic models to generate high-quality, diverse character animations in real time. At its core is a transformer-based Conditional Autoregressive Motion Diffusion Model (CAMDM), which takes the character's historical motion as input and generates a range of diverse possible future motions conditioned on high-level, coarse user control. To meet the diversity, controllability, and computational-efficiency demands of a real-time controller, the framework incorporates several key algorithmic designs: separate condition tokenization, classifier-free guidance on past motion, and heuristic future trajectory extension, each addressing a challenge in taming motion diffusion probabilistic models for character control.
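To make the first two designs concrete, the sketch below shows one way separate condition tokenization and classifier-free guidance on past motion could fit together: each condition (diffusion step, style label, future trajectory, past motion) is embedded as its own token sequence for a transformer denoiser, and at sampling time the past-conditioned and past-free predictions are blended. Module names, dimensions, and the guidance weight are illustrative assumptions, not the released CAMDM implementation.

```python
# Hypothetical sketch: separate condition tokenization + classifier-free
# guidance on past motion. Names and dimensions are assumptions for
# illustration, not the authors' code.
import torch
import torch.nn as nn


class ConditionalDenoiser(nn.Module):
    """Transformer denoiser whose conditions enter as separate tokens."""

    def __init__(self, pose_dim=138, style_count=8, d_model=256):
        super().__init__()
        self.pose_proj = nn.Linear(pose_dim, d_model)        # noisy future poses
        self.past_proj = nn.Linear(pose_dim, d_model)        # past-motion frames
        self.traj_proj = nn.Linear(4, d_model)               # per-frame (x, z, dx, dz) targets
        self.style_emb = nn.Embedding(style_count, d_model)  # style label token
        self.time_emb = nn.Sequential(nn.Linear(1, d_model), nn.SiLU(),
                                      nn.Linear(d_model, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.out = nn.Linear(d_model, pose_dim)

    def forward(self, x_t, t, past, traj, style, drop_past=False):
        # x_t:  (B, F, pose_dim) noisy future frames at diffusion step t
        # past: (B, P, pose_dim) historical motion; zeroed when dropped, as in
        #       classifier-free-guidance training where the past is randomly masked
        if drop_past:
            past = torch.zeros_like(past)
        tokens = torch.cat([
            self.time_emb(t.float().view(-1, 1, 1)),  # 1 diffusion-step token
            self.style_emb(style).unsqueeze(1),        # 1 style token
            self.traj_proj(traj),                      # future-trajectory tokens
            self.past_proj(past),                      # past-motion tokens
            self.pose_proj(x_t),                       # noisy future-pose tokens
        ], dim=1)                                      # (positional encodings omitted)
        h = self.encoder(tokens)
        return self.out(h[:, -x_t.shape[1]:])          # denoised future frames


def guided_prediction(model, x_t, t, past, traj, style, w=2.0):
    """Classifier-free guidance applied only to the past-motion condition."""
    cond = model(x_t, t, past, traj, style, drop_past=False)
    uncond = model(x_t, t, past, traj, style, drop_past=True)
    return uncond + w * (cond - uncond)  # w > 1 strengthens continuity with the past
```

Keeping each condition as its own token (rather than concatenating everything into one vector) lets the transformer attend to the past motion, trajectory, and style independently, which is what makes dropping only the past-motion tokens for guidance straightforward in this sketch.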
The method is evaluated on a diverse set of locomotion skills, demonstrating its advantages over existing character controllers. It produces high-quality, diverse animations in real time and can animate a character in multiple styles with a single unified model, generating natural and agile transitions between styles even when transition motions are absent from the dataset. Training uses a single A100 GPU, and real-time inference runs on an RTX 3060 GPU. In comparisons, the method outperforms state-of-the-art character controllers in motion quality, condition alignment, and transition success rate.
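The heuristic future trajectory extension mentioned above can be pictured as extrapolating the coarse joystick command into per-frame root-trajectory targets for the prediction window. The sketch below uses a simple exponential blend from the current root velocity toward the requested one; the function name, frame count, and blending rule are illustrative assumptions rather than the paper's exact heuristic.

```python
# Hypothetical sketch of heuristic future-trajectory extension: extrapolate the
# user's coarse (direction, speed) command into per-frame root targets.
import numpy as np


def extend_future_trajectory(root_pos, root_vel, target_dir, target_speed,
                             n_future=45, dt=1.0 / 30.0, blend=0.15):
    """Extrapolate future root positions and velocities on the ground plane.

    root_pos:     (2,) current root position (x, z)
    root_vel:     (2,) current root velocity (x, z)
    target_dir:   (2,) unit direction requested by the user
    target_speed: requested speed in m/s
    Returns an (n_future, 4) array of per-frame (x, z, dx, dz) control targets.
    """
    target_vel = np.asarray(target_dir, dtype=float) * target_speed
    pos = np.asarray(root_pos, dtype=float).copy()
    vel = np.asarray(root_vel, dtype=float).copy()
    traj = np.zeros((n_future, 4))
    for i in range(n_future):
        # Exponentially blend the current velocity toward the requested one so
        # the extended trajectory turns smoothly instead of snapping.
        vel = (1.0 - blend) * vel + blend * target_vel
        pos = pos + vel * dt
        traj[i] = (pos[0], pos[1], vel[0], vel[1])
    return traj
```

In a real-time controller, an extension of this kind would be recomputed every frame from the latest gamepad input and fed to the denoiser as the future-trajectory condition, so the generated motion keeps tracking the user's coarse control.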