16 Aug 2024 | François Rozet, Gérôme Andry, François Lanusse, Gilles Louppe
This paper presents a novel method for training diffusion models from incomplete and noisy observations using the expectation-maximization (EM) algorithm. The method addresses the challenge of learning diffusion priors for Bayesian inverse problems when only corrupted observations, rather than clean data, are available. Unlike previous approaches, it yields proper diffusion models, which is crucial for downstream tasks. A key component is an improved posterior sampling scheme for unconditional diffusion models, motivated both theoretically and empirically.
The paper introduces a framework for learning diffusion models as priors for Bayesian inference. The key idea is to alternate between generating samples from the posterior distribution and training the prior model on these samples. The method is applied to various scientific applications, including gravitational lensing inversion, accelerated MRI, and data assimilation, where the observation is either too incomplete or too noisy to recover the latent variable directly.
The paper also reviews diffusion models and their ability to generate plausible data from a distribution of interest. Central to the method is the score function, which is approximated by a neural network trained via denoising score matching. The paper introduces a new posterior sampling scheme, called moment matching posterior sampling (MMPS), which improves the quality of posterior samples and in turn leads to more accurate diffusion priors.
The results show that the proposed method outperforms previous approaches in terms of sample quality and convergence. The method is evaluated on three experiments: a low-dimensional manifold, corrupted CIFAR-10, and accelerated MRI. The results demonstrate that the method leads to more accurate and detailed samples compared to existing methods. The paper also discusses related work, highlighting the limitations of previous empirical Bayes methods and the potential of diffusion models for Bayesian inference. The method is shown to be effective in various scientific applications, including medical imaging and data assimilation.