The paper "Diffusion Models Beat GANs on Image Synthesis" by Prafulla Dhariwal and Alex Nichol from OpenAI demonstrates that diffusion models can achieve superior image sample quality compared to state-of-the-art generative models, particularly GANs. The authors achieve this by improving the architecture of diffusion models through a series of ablations and introducing classifier guidance, a method to trade off diversity for fidelity using gradients from a classifier. They report FID scores of 2.97 on ImageNet 128×128, 4.59 on ImageNet 256×256, and 7.72 on ImageNet 512×512, matching BigGAN-deep with as few as 25 forward passes per sample. Classifier guidance is further combined with upsampling diffusion models, improving FID to 3.94 on ImageNet 256×256 and 3.85 on ImageNet 512×512. The paper also discusses the limitations and future work, including the need for faster sampling and the potential for extending classifier guidance to unlabeled datasets.The paper "Diffusion Models Beat GANs on Image Synthesis" by Prafulla Dhariwal and Alex Nichol from OpenAI demonstrates that diffusion models can achieve superior image sample quality compared to state-of-the-art generative models, particularly GANs. The authors achieve this by improving the architecture of diffusion models through a series of ablations and introducing classifier guidance, a method to trade off diversity for fidelity using gradients from a classifier. They report FID scores of 2.97 on ImageNet 128×128, 4.59 on ImageNet 256×256, and 7.72 on ImageNet 512×512, matching BigGAN-deep with as few as 25 forward passes per sample. Classifier guidance is further combined with upsampling diffusion models, improving FID to 3.94 on ImageNet 256×256 and 3.85 on ImageNet 512×512. The paper also discusses the limitations and future work, including the need for faster sampling and the potential for extending classifier guidance to unlabeled datasets.