2024 | Angus Phillips, Hai-Dang Dau, Michael John Hutchinson, Valentin De Bortoli, George Deligiannidis, Arnaud Doucet
The article introduces the Particle Denoising Diffusion Sampler (PDDS), a novel method for sampling from unnormalized probability densities and estimating their normalizing constants. PDDS builds on denoising diffusion models: starting from Gaussian noise, it simulates an approximation of the time-reversed diffusion that transports noise back to the target distribution, using score approximations learned with a novel score matching loss. Unlike standard denoising diffusion models, the score terms of this reversal are intractable in the sampling setting; PDDS corrects the resulting approximation error with a particle scheme, yielding asymptotically consistent estimates under mild assumptions.
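To make the reverse-time simulation concrete, here is a minimal sketch of Euler-Maruyama integration of a reversed diffusion. It assumes a simple variance-preserving (Ornstein-Uhlenbeck) forward process with constant $\beta$ and a generic approximate score function `score_fn`; both are illustrative assumptions, not the paper's exact discretization.

```python
import numpy as np

def reverse_diffusion_sample(score_fn, dim, n_steps=1000, T=1.0, beta=1.0, rng=None):
    """Euler-Maruyama simulation of an approximate time-reversed diffusion.

    Assumed forward process (variance-preserving / Ornstein-Uhlenbeck):
        dX_t = -(beta / 2) X_t dt + sqrt(beta) dW_t.
    Its reversal requires the score grad log p_t(x); `score_fn(x, t)` stands in
    for a learned approximation of that score.
    """
    rng = rng or np.random.default_rng()
    dt = T / n_steps
    x = rng.standard_normal(dim)  # initialize from the Gaussian reference
    for k in range(n_steps, 0, -1):
        t = k * dt
        # reverse-time drift: the forward drift is negated and the score added
        drift = 0.5 * beta * x + beta * score_fn(x, t)
        x = x + drift * dt + np.sqrt(beta * dt) * rng.standard_normal(dim)
    return x
```

On its own, this procedure is only as accurate as the score approximation; the particle corrections described next are what restore consistency.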
The paper develops the theoretical foundations of PDDS, including the use of guided diffusions and the role of score matching in approximating the intractable score terms. It presents a detailed algorithm in which a population of particles is alternately propagated along the approximate reversal, weighted, and resampled, so that the particle system approximates the target distribution. Theoretical results establish that the resulting estimates of the normalizing constant and of expectations under the target are asymptotically consistent, and the method's performance is examined across a range of sampling tasks.
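A minimal sketch of such a propagate/weight/resample loop is given below. The names `propagate` (one step along the approximate reversal) and `log_potential` (the log of the potential $g_k$) are hypothetical placeholders, and resampling is performed at every step for simplicity rather than adaptively.

```python
import numpy as np

def particle_sampler(propagate, log_potential, n_particles, n_steps, dim, rng=None):
    """Skeleton of a sequential Monte Carlo sampler of the kind PDDS instantiates.

    Returns the final particle cloud and a running estimate of log Z,
    accumulated from the mean of the unnormalized incremental weights.
    """
    rng = rng or np.random.default_rng()
    x = rng.standard_normal((n_particles, dim))  # Gaussian initialization
    log_z = 0.0
    for k in range(n_steps):
        x = propagate(x, k)               # move particles along the reversal
        logw = log_potential(x, k)        # incremental importance weights
        m = logw.max()                    # log-sum-exp trick for stability
        w = np.exp(logw - m)
        log_z += m + np.log(w.mean())     # update normalizing-constant estimate
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)  # multinomial resampling
        x = x[idx]
    return x, log_z
```

In practice, adaptive resampling triggered by a low effective sample size is common; the always-resample variant above just keeps the log-Z bookkeeping simple.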
The article also explores using score matching to learn better approximations of the potential functions $g_k$ that guide the diffusion. This involves training neural networks to approximate the scores of the target distribution, which in turn improves the accuracy of the PDDS algorithm. The paper compares PDDS with other sampling methods, including SMC, CRAFT, and diffusion-based samplers, reporting lower estimation bias and variance, particularly on high-dimensional and multimodal sampling tasks.
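As a rough illustration of this learning step, the sketch below fits a small network with a standard denoising score matching objective. This generic loss is a stand-in for the paper's novel one (which is designed for the sampling setting, where exact target samples are unavailable); the architecture and hyperparameters (`ScoreNet`, hidden width, learning rate) are assumptions for the example, and `x0` stands for approximate samples, e.g. from a previous particle run.

```python
import torch
import torch.nn as nn

class ScoreNet(nn.Module):
    """Tiny time-conditioned score network (illustrative architecture)."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=-1))

def train_score(model, x0, n_iters=2000, lr=1e-3):
    """Denoising score matching on samples x0 (generic loss, not the paper's).

    Perturb x0 through the variance-preserving kernel x_t = alpha x0 + sigma eps;
    the conditional score at x_t is -eps / sigma, so the sigma^2-weighted
    regression below has a tractable, numerically stable target.
    """
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(n_iters):
        t = torch.rand(x0.shape[0], 1).clamp_min(1e-3)
        alpha = torch.exp(-0.5 * t)
        sigma = torch.sqrt(1.0 - alpha ** 2)
        eps = torch.randn_like(x0)
        xt = alpha * x0 + sigma * eps
        # sigma * score(xt, t) should predict -eps
        loss = ((sigma * model(xt, t) + eps) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```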
Experimental results demonstrate that PDDS outperforms these baselines in normalizing constant estimation and sample quality, especially on challenging tasks such as Gaussian mixture models with multiple modes. The method handles multimodal and high-dimensional sampling problems effectively, producing samples that closely match the target distribution. The paper concludes that PDDS provides a robust and efficient approach to sampling from unnormalized distributions, with potential applications across a wide range of statistical and machine learning tasks.