This paper proposes to improve diffusion models for solving inverse problems by optimizing the posterior covariance. The authors first give a unified interpretation of recent diffusion-based solvers, showing that they can be viewed as approximating the conditional posterior mean using a Gaussian distribution with a hand-crafted isotropic covariance. They then advocate a more principled covariance determined by maximum likelihood estimation and present two plug-and-play ways to optimize it without retraining: the first leverages the reverse covariance prediction of pre-trained models, while the second uses a Monte Carlo estimate that does not require reverse covariance prediction. In addition, they propose a scalable method for learning posterior covariance prediction based on orthonormal basis representations, along with a way to model pixel correlations through latent variances.

Experiments on inpainting, deblurring, and super-resolution show that the proposed covariance optimization significantly improves reconstruction performance over existing methods while eliminating the need for hyperparameter tuning.
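To make the underlying guidance step concrete, below is a minimal sketch, under the assumption of a linear inpainting operator and an isotropic posterior covariance r_t^2 I (the hand-crafted baseline the paper generalizes). All names here (gaussian_posterior_guidance, denoiser, mask, r_t, sigma_y) are hypothetical, not from the paper's code: the denoising posterior p(x0 | x_t) is approximated by a Gaussian centered at the denoiser output, so the measurement likelihood becomes Gaussian with per-pixel variance r_t^2 + sigma_y^2 on observed entries, and its score with respect to x_t is obtained by autograd.

```python
# Minimal sketch of Gaussian-posterior guidance for an inpainting inverse problem.
# Assumes an isotropic posterior covariance r_t^2 * I; the paper's contribution
# would replace this hand-crafted scalar with an optimized (e.g., maximum-likelihood
# or Monte Carlo estimated) covariance.
import torch

def gaussian_posterior_guidance(x_t, y, mask, denoiser, r_t, sigma_y):
    """Return an approximation of grad_{x_t} log p(y | x_t)."""
    x_t = x_t.detach().requires_grad_(True)
    x0_hat = denoiser(x_t)                      # posterior mean E[x0 | x_t]
    residual = mask * (y - x0_hat)              # mismatch on observed pixels only
    # Isotropic covariance => elementwise scaling by 1 / (r_t^2 + sigma_y^2).
    log_lik = -0.5 * (residual ** 2).sum() / (r_t ** 2 + sigma_y ** 2)
    (grad,) = torch.autograd.grad(log_lik, x_t)
    return grad

# Toy usage with a random convolution standing in for a pre-trained denoiser.
if __name__ == "__main__":
    denoiser = torch.nn.Conv2d(3, 3, 3, padding=1)   # placeholder for a real model
    x_t = torch.randn(1, 3, 32, 32)
    mask = (torch.rand(1, 3, 32, 32) > 0.5).float()  # binary inpainting mask (A)
    y = mask * torch.randn(1, 3, 32, 32)             # observed, masked measurement
    g = gaussian_posterior_guidance(x_t, y, mask, denoiser, r_t=0.5, sigma_y=0.05)
    print(g.shape)
```

Under the paper's approach, the scalar r_t ** 2 in this sketch would be replaced by a per-pixel (or basis-represented) variance predicted from the pre-trained model or estimated by Monte Carlo, which is what removes the need to hand-tune the guidance weight per task.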