Diffusion Models (DMs) have achieved significant success in image generation and other domains. By sampling along the trajectory defined by a well-trained score model with SDE/ODE solvers, DMs can generate high-quality images, but this process typically requires many steps and is computationally intensive. To address this, instance-based distillation methods have been proposed to distill a one-step generator from a DM; however, these methods are limited by mismatched local minima between teacher and student models, leading to suboptimal performance. To overcome this, the authors introduce a novel distributional distillation method that relies exclusively on a distributional loss and achieves state-of-the-art (SOTA) results with far fewer training images. They also show that DMs activate different layers at different time steps, which gives them an innate capability to generate images in a single step; freezing most of a DM's convolutional layers during distributional distillation strengthens this capability and further improves performance. The proposed methods, GDD (GAN Distillation at Distribution Level) and GDD-I (GAN Distillation at Distribution Level using Innate Ability), achieve SOTA results on CIFAR-10 (FID 1.54), AFHQv2 64x64 (FID 1.23), FFHQ 64x64 (FID 0.85), and ImageNet 64x64 (FID 1.16) with high efficiency, using only 5 million training images over 6 hours on 8 A100 GPUs. GDD-I outperforms GDD on every dataset except AFHQv2 64x64. The study also examines the mechanism behind this efficient distillation, highlighting the inherent capability of DMs for one-step generation and the effectiveness of distributional distillation.
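The distribution-level objective can be illustrated with a toy GAN-style distillation loop: a discriminator separates teacher samples from student samples, so the one-step student is trained to match the teacher's output distribution rather than individual teacher trajectories. This is a minimal sketch under stated assumptions, not the authors' implementation: `OneStepGenerator`, `Discriminator`, and the Gaussian `teacher_samples` stand-in are hypothetical, and the real method distills a pretrained score network on images.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in for samples from the teacher DM's output distribution.
# In the real setting these would come from the multi-step teacher sampler.
def teacher_samples(n, dim=2):
    return torch.randn(n, dim) * 0.5 + 2.0

# Hypothetical one-step student: maps noise z directly to a sample.
class OneStepGenerator(nn.Module):
    def __init__(self, dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, z):
        return self.net(z)

# Discriminator that compares sample *distributions*, not paired instances.
class Discriminator(nn.Module):
    def __init__(self, dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

G, D = OneStepGenerator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    z = torch.randn(128, 2)
    real, fake = teacher_samples(128), G(z)

    # Discriminator step: tell teacher samples from student samples.
    d_loss = bce(D(real), torch.ones(128, 1)) + bce(D(fake.detach()), torch.zeros(128, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: match the teacher's distribution; note there is
    # no per-instance pairing between teacher and student outputs.
    g_loss = bce(D(G(z)), torch.ones(128, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Inspect how far the student's sample mean has moved toward the teacher's.
print(G(torch.randn(512, 2)).mean(0))
```

Because the loss only asks the student's samples to be indistinguishable from the teacher's as a population, the student is free to settle in its own local minimum instead of being forced onto the teacher's, which is the motivation the summary gives for distributional over instance-based distillation.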
The findings suggest that DMs can generate images in a single step without multi-step sampling, and that freezing most of the convolutional layers in a DM during distillation enhances this capability. The results demonstrate the robustness and effectiveness of the proposed methods, supporting the hypothesis that DMs are innately capable of one-step generation. The study also identifies limitations, including the need for further verification on high-resolution datasets and for exploration of the roles different layers play in multi-step versus one-step generation; functions such as image editing and inpainting are also yet to be implemented. The authors conclude that their approach provides a novel method for distilling one-step generators from DMs, achieving SOTA results with minimal computational resources and offering valuable insights into diffusion distillation.
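The GDD-I recipe of freezing most convolutional layers can be sketched as a parameter-selection pass over the student network. This is an illustrative sketch only: `TinyUNet` is a hypothetical stand-in for a diffusion U-Net, and the choice of which block stays trainable is an assumption, not the paper's exact configuration.

```python
import torch.nn as nn

# Hypothetical stand-in for a diffusion U-Net student.
class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1),
                                 nn.Conv2d(16, 32, 3, padding=1))
        self.mid = nn.Conv2d(32, 32, 3, padding=1)
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1),
                                 nn.Conv2d(16, 3, 3, padding=1))

def freeze_most_convs(model, trainable_prefixes=("dec.1",)):
    """Freeze every parameter except those under the given (assumed) prefixes."""
    for name, p in model.named_parameters():
        p.requires_grad = any(name.startswith(t) for t in trainable_prefixes)

model = TinyUNet()
freeze_most_convs(model)

n_total = sum(p.numel() for p in model.parameters())
n_train = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {n_train}/{n_total}")
```

Only the unfrozen parameters receive gradients during the distributional distillation loop; the frozen layers keep the behavior learned during diffusion pretraining, which is the mechanism the summary credits for GDD-I's further performance gains.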