Particle Denoising Diffusion Sampler

2024 | Angus Phillips, Hai-Dang Dau, Michael John Hutchinson, Valentin De Bortoli, George Deligiannidis, Arnaud Doucet
This paper introduces the Particle Denoising Diffusion Sampler (PDDS), a novel method for Monte Carlo sampling and normalizing constant estimation. PDDS is inspired by denoising diffusion models but is adapted to sampling from unnormalized probability densities and estimating their normalizing constants. The core idea is to simulate the time-reversed diffusion process using an iterative particle scheme based on a novel score matching loss, which yields asymptotically consistent estimates under mild assumptions.

The paper first outlines the theoretical framework of denoising diffusions, covering both the noising and denoising processes, and discusses the challenges in simulating the time-reversed diffusion: the score terms are intractable and the process must be discretized. It then presents guided diffusions, which combine pre-trained diffusion models with guidance terms derived from likelihoods to sample from posterior distributions. PDDS is introduced as a particle method that corrects the discrepancy between the distribution produced by the guided diffusion and the target. The algorithm is described in detail, including the use of sequential Monte Carlo (SMC) techniques, resampling strategies, and optional MCMC steps to improve performance. Theoretical results show that PDDS provides unbiased estimates of the normalizing constant and asymptotically consistent estimates of expectations with respect to the target distribution.

The paper also discusses the use of score matching to learn better approximations of the potential functions $g_k$ used in the denoising diffusion process. It presents a neural network parametrization for the potential functions and describes the training process using two score matching loss functions: Denoising Score Matching (DSM) and Novel Score Matching (NSM).
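As a toy illustration of the denoising diffusion mechanics described above (not the paper's algorithm), the sketch below simulates an Ornstein-Uhlenbeck noising process in reverse for a one-dimensional Gaussian target, where the marginal score is available in closed form so no learning is needed. All parameter choices and function names here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: pi = N(mu, sigma^2). With the OU noising process
# dX_t = -X_t dt + sqrt(2) dW_t, the marginal p_t stays Gaussian,
# so the score grad log p_t is available in closed form.
mu, sigma = 2.0, 0.5
T, n_steps, n_particles = 3.0, 300, 20_000
dt = T / n_steps

def marginal_params(t):
    """Mean and variance of p_t when X_0 ~ N(mu, sigma^2)."""
    m = mu * np.exp(-t)
    v = sigma**2 * np.exp(-2 * t) + 1.0 - np.exp(-2 * t)
    return m, v

def score(x, t):
    m, v = marginal_params(t)
    return -(x - m) / v

# Initialise from the known marginal at time T, then integrate the
# time-reversed SDE dY_s = [Y_s + 2 * score(Y_s, T - s)] ds + sqrt(2) dB_s
# with Euler-Maruyama.
m_T, v_T = marginal_params(T)
y = m_T + np.sqrt(v_T) * rng.standard_normal(n_particles)
for k in range(n_steps):
    t = T - k * dt
    drift = y + 2.0 * score(y, t)
    y = y + drift * dt + np.sqrt(2.0 * dt) * rng.standard_normal(n_particles)

print(y.mean(), y.std())  # close to mu = 2.0 and sigma = 0.5
```

In the sampling problem PDDS addresses, the score of $p_t$ is exactly what is *not* available in closed form; the intractable term is approximated and the resulting error is corrected by the particle scheme.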
The NSM loss is shown to be more effective than DSM at reducing variance, especially when the discretization step is small.

The paper evaluates PDDS on normalizing constant estimation and sample quality assessment, comparing it with methods such as SMC, CRAFT, DDS, and PIS. PDDS provides more accurate normalizing constant estimates and better sample quality, particularly in challenging multimodal and high-dimensional settings, and outperforms these methods in estimation bias and variance, especially when MCMC steps are used. The paper concludes by discussing the limitations of PDDS, such as its reliance on a well-behaved potential function $g_0$, and suggests that more sophisticated techniques may be needed for certain applications. Overall, PDDS offers a promising approach to Monte Carlo sampling and normalizing constant estimation, leveraging the strengths of denoising diffusion models and sequential Monte Carlo methods.
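To make the DSM side of the comparison concrete: for a perturbation $x_t = a x_0 + s\,\varepsilon$, standard denoising score matching regresses a score model onto the conditional score $-\varepsilon/s$. The minimal sketch below (the paper's NSM loss is a different objective and is not reproduced here) checks by Monte Carlo that the true marginal score attains a lower DSM loss than a deliberately mis-scaled model; the setup is an illustrative assumption, not the paper's experiment:

```python
import numpy as np

rng = np.random.default_rng(2)

# OU perturbation kernel at time t: x_t = a*x0 + s*eps, eps ~ N(0,1).
t = 0.5
a, s = np.exp(-t), np.sqrt(1.0 - np.exp(-2 * t))

n = 200_000
x0 = rng.standard_normal(n)        # data ~ N(0, 1)
eps = rng.standard_normal(n)
xt = a * x0 + s * eps

v_t = a**2 + s**2                  # marginal variance of x_t (= 1 here)

def true_score(x):                 # exact score of the marginal N(0, v_t)
    return -x / v_t

def bad_score(x):                  # deliberately mis-scaled score model
    return -1.3 * x / v_t

def dsm_loss(score_fn):
    """DSM objective: E || s_theta(x_t) + eps / s ||^2."""
    return np.mean((score_fn(xt) + eps / s) ** 2)

print(dsm_loss(true_score) < dsm_loss(bad_score))  # True
```

The DSM loss is minimized by the true marginal score but has an irreducible variance floor; the motivation for a different loss is precisely that this floor behaves badly as the discretization step shrinks.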
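The SMC correction at the heart of such particle samplers can be sketched generically: weight proposal samples by the unnormalized target-to-proposal density ratio, estimate the normalizing constant by the mean unnormalized weight, and resample. The densities and parameters below are illustrative stand-ins, not the paper's potentials $g_k$:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_rho(x):   # proposal: N(0, 1), up to an additive constant
    return -0.5 * x**2

def log_pi(x):    # target: N(1, 0.5^2), same constant omitted
    return -0.5 * ((x - 1.0) / 0.5) ** 2 - np.log(0.5)

n = 10_000
x = rng.standard_normal(n)              # draw particles from rho
logw = log_pi(x) - log_rho(x)           # importance log-weights

Z_hat = np.exp(logw).mean()             # unbiased normalizing-constant estimate

w = np.exp(logw - logw.max())           # shift for numerical stability
w /= w.sum()
ess = 1.0 / np.sum(w**2)                # effective sample size

idx = rng.choice(n, size=n, p=w)        # multinomial resampling
x_resampled = x[idx]

print(Z_hat, x_resampled.mean(), ess)   # Z_hat near 1, mean near 1.0
```

In PDDS this weighting-and-resampling step is interleaved with the guided denoising diffusion steps, which is what makes the normalizing constant estimate unbiased and the expectation estimates asymptotically consistent despite the approximate potentials.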