Accelerating Convergence of Score-Based Diffusion Models, Provably


March 7, 2024 | Gen Li†, Yu Huang*,‡, Timofey Efimov§, Yuting Wei‡, Yuejie Chi§, Yuxin Chen‡
This paper addresses the slow sampling speed of score-based diffusion models, which typically require extensive function evaluations during the sampling phase. Despite recent efforts to speed up diffusion generative modeling in practice, the theoretical foundations for acceleration techniques remain limited. The authors propose novel training-free algorithms to accelerate both the deterministic (DDIM) and stochastic (DDPM) samplers. The accelerated deterministic sampler converges at a rate of \(O(\frac{1}{T^2})\), improving upon the \(O(\frac{1}{T})\) rate of the DDIM sampler, while the accelerated stochastic sampler converges at a rate of \(O(\frac{1}{T})\), outperforming the \(O(\frac{1}{\sqrt{T}})\) rate of the DDPM sampler. The design of these algorithms leverages insights from higher-order approximation and shares similar intuitions with popular high-order ODE solvers such as DPM-Solver-2. The theory accommodates \(\ell_2\)-accurate score estimates and requires neither log-concavity nor smoothness of the target distribution. The paper also includes experimental results demonstrating the effectiveness of the proposed samplers compared to their non-accelerated counterparts.
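To make the higher-order approximation idea concrete, below is a minimal Heun-style (second-order) sketch of a probability-flow ODE sampler in the spirit of DPM-Solver-2. This is not the paper's accelerated algorithm; `eps_model`, `sigma`, and the time grid are hypothetical placeholders chosen purely for illustration.

```python
# Minimal sketch (not the paper's algorithm): a Heun-style second-order update
# for a variance-exploding probability-flow ODE, illustrating the
# "higher-order approximation" intuition shared with solvers like DPM-Solver-2.
import numpy as np

def eps_model(x, t):
    """Hypothetical noise-prediction network; replace with a trained model."""
    return np.zeros_like(x)

def sigma(t):
    """Assumed placeholder noise schedule: sigma(t) = t (so d sigma/dt = 1)."""
    return t

def heun_sampler(x_T, t_grid):
    """Integrate dx/dt = (d sigma/dt) * eps(x, t) from large t down to small t
    using Heun's method (predictor-corrector, two evaluations per step)."""
    x = x_T
    for t_cur, t_next in zip(t_grid[:-1], t_grid[1:]):
        h = t_next - t_cur                   # negative step (t is decreasing)
        d_cur = eps_model(x, t_cur)          # slope at the current point
        x_euler = x + h * d_cur              # first-order (Euler/DDIM-like) predictor
        d_next = eps_model(x_euler, t_next)  # slope at the predicted point
        x = x + h * 0.5 * (d_cur + d_next)   # second-order (trapezoidal) corrector
    return x

# Usage: start from Gaussian noise at the largest noise level and integrate down.
t_grid = np.linspace(1.0, 1e-3, 20)
x_T = np.random.randn(2, 8) * sigma(t_grid[0])
x_0 = heun_sampler(x_T, t_grid)
```

Each step spends two noise-prediction evaluations so that the leading first-order discretization error cancels; this is the generic mechanism behind high-order ODE solvers such as DPM-Solver-2, and the same kind of higher-order reasoning motivates the paper's training-free accelerated samplers.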