Marginal Likelihood From the Metropolis–Hastings Output

March 2001, Vol. 96, No. 453, Theory and Methods | Siddhartha Chib and Ivan Jeliazkov
This article presents a framework for estimating marginal likelihoods to facilitate Bayesian model comparison. It extends and improves upon Chib's (1995) method by addressing cases in which the full conditional densities are intractable. The proposed method uses the same MCMC draws produced by the Metropolis–Hastings algorithm for both sampling and marginal likelihood estimation. The approach is demonstrated in experiments with several models, including logit models for binary data, hierarchical random-effects models for clustered Gaussian data, Poisson regression models for clustered count data, and multivariate probit models for correlated binary data; these examples show that the method is practical and widely applicable. The article discusses the estimation of the posterior ordinate, its decomposition when the parameters are sampled in blocks, and the numerical standard error of the marginal likelihood estimate. It also examines how different MCMC designs, such as proposal densities, blocking schemes, and simulation sample sizes, affect the efficiency and accuracy of the marginal likelihood estimate. Overall, the method is robust to changes in simulation sample size, blocking scheme, and sampling method, with lower numerical standard errors associated with schemes that produce lower inefficiency factors.
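For orientation, the following is a brief sketch of the quantities mentioned above in the single-block case, written in the standard notation for this setup (likelihood f, prior \pi, proposal density q, Metropolis–Hastings acceptance probability \alpha, evaluation point \theta^{*}); it is a summary rather than a quotation of the paper. The marginal likelihood follows from the basic marginal likelihood identity of Chib (1995), evaluated at a high-density point \theta^{*} such as the posterior mean or mode:
\[
\log m(y) \;=\; \log f(y \mid \theta^{*}) \;+\; \log \pi(\theta^{*}) \;-\; \log \pi(\theta^{*} \mid y),
\]
and the posterior ordinate \pi(\theta^{*} \mid y) is estimated directly from the Metropolis–Hastings output as
\[
\hat{\pi}(\theta^{*} \mid y) \;=\;
\frac{M^{-1} \sum_{g=1}^{M} \alpha\bigl(\theta^{(g)}, \theta^{*} \mid y\bigr)\, q\bigl(\theta^{(g)}, \theta^{*} \mid y\bigr)}
     {J^{-1} \sum_{j=1}^{J} \alpha\bigl(\theta^{*}, \theta^{(j)} \mid y\bigr)},
\]
where \{\theta^{(g)}\} are the draws from the posterior produced by the Metropolis–Hastings sampler and \{\theta^{(j)}\} are draws from the proposal density q(\theta^{*}, \cdot \mid y). Because the numerator reuses the sampled chain and the denominator requires only draws from the proposal, no additional full conditional densities are needed, which is what allows the extension beyond Chib's (1995) Gibbs-based setting.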