Approximate Bayesian Computational methods*

May 30, 2011 | Jean-Michel Marin†, Pierre Pudlo, Christian P. Robert, Robin J. Ryder
Approximate Bayesian Computation (ABC) methods, also known as likelihood-free methods, have become a prominent approach to Bayesian inference when the likelihood function is intractable. They are particularly useful in population genetics and other fields where the likelihood is either unavailable or computationally infeasible to evaluate. ABC approximates the posterior distribution by simulating data from the model and comparing the simulations to the observed data through summary statistics and a tolerance level.

The original ABC algorithm, introduced by Tavaré et al. (1997), is a rejection sampler: parameters are drawn from the prior, data are simulated under those parameters, and a draw is kept only when its summary statistics fall within the tolerance of the observed summaries. This scheme is sensitive to the choice of summary statistics and tolerance level, and its performance can be improved through various extensions. MCMC-ABC embeds the simulation step within a Markov chain Monte Carlo sampler to improve efficiency, while sequential methods such as ABC-PMC and ABC-SMC sharpen the approximation by running a sequence of simulations with decreasing tolerance levels, combining importance sampling with adaptive resampling strategies to improve convergence.

Calibration is a critical aspect of applying ABC, since the choice of summary statistics and tolerance level largely determines the accuracy of the approximation. Post-processing techniques, such as local linear regression and nonlinear regression adjustments, correct the accepted parameter values using the discrepancy between simulated and observed summaries, thereby mitigating the impact of a nonzero tolerance on the approximation of the posterior distribution.
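The basic rejection scheme described above can be sketched in a few lines. This is a toy illustration of our own (inferring the mean of a Gaussian, with the sample mean as summary statistic; the prior, tolerance, and sample sizes are all illustrative choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: observed data are n draws from N(theta, 1);
# we place a N(0, 10) prior on theta.
n = 50
theta_true = 2.0
y_obs = rng.normal(theta_true, 1.0, size=n)

def summary(y):
    # Summary statistic: the sample mean (sufficient in this toy model).
    return y.mean()

def abc_rejection(y_obs, n_samples=1000, eps=0.1):
    """Basic ABC rejection sampler: keep a prior draw whenever the
    summary of its simulated data falls within eps of the observed one."""
    s_obs = summary(y_obs)
    accepted = []
    while len(accepted) < n_samples:
        theta = rng.normal(0.0, np.sqrt(10.0))            # draw from the prior
        y_sim = rng.normal(theta, 1.0, size=len(y_obs))   # simulate data
        if abs(summary(y_sim) - s_obs) < eps:             # compare summaries
            accepted.append(theta)
    return np.array(accepted)

post = abc_rejection(y_obs)
print(post.mean())  # close to the sample mean of y_obs in this toy model
```

The accepted draws are (approximately) distributed according to the posterior conditional on the summary statistic lying within the tolerance; shrinking `eps` tightens the approximation at the cost of a lower acceptance rate.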
In the context of model choice, ABC compares candidate models by simulating data from each and estimating posterior model probabilities from the acceptance frequencies, selecting the model with the highest estimated probability. The accuracy of these approximations is again affected by the choice of summary statistics and tolerance level, and further research is needed to improve the performance of ABC in model choice scenarios. Overall, ABC provides a flexible and practical approach to Bayesian inference when the likelihood is intractable, but its effectiveness depends on careful calibration and an appropriate choice of summary statistics and tolerance levels.
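The ABC model-choice procedure just described can be sketched as follows. This is a toy example of our own (two Gaussian models distinguished by their mean, uniform prior over models, sample mean as summary; all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model-choice setup: model 0 is N(0, 1), model 1 is N(1, 1);
# the observed summary is the sample mean.
n = 40
y_obs = rng.normal(1.0, 1.0, size=n)   # data actually generated under model 1
s_obs = y_obs.mean()

def simulate(m, size):
    mu = 0.0 if m == 0 else 1.0
    return rng.normal(mu, 1.0, size=size)

def abc_model_choice(n_iter=20000, eps=0.1):
    """ABC for model choice: draw a model index from its prior, simulate
    data under that model, and keep the index when the summaries match;
    the frequency of each index among the accepted draws estimates its
    posterior probability."""
    kept = []
    for _ in range(n_iter):
        m = rng.integers(0, 2)                        # uniform prior over models
        if abs(simulate(m, n).mean() - s_obs) < eps:  # summary comparison
            kept.append(m)
    kept = np.array(kept)
    return np.array([(kept == 0).mean(), (kept == 1).mean()])

probs = abc_model_choice()
print(probs)  # model 1 should receive most of the posterior mass here
```

The caveat in the text applies directly to this sketch: the estimated probabilities depend on the summary statistic, and a summary that is informative within each model need not be informative for discriminating between models.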
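The local linear regression adjustment mentioned earlier can also be sketched briefly. Here we assume, as an illustration, that a rejection-ABC run has already produced accepted pairs of parameters and simulated summaries (the synthetic pairs below stand in for such output; the setup is ours, in the spirit of the regression post-processing the text describes):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for rejection-ABC output on a toy Gaussian-mean problem:
# accepted parameters and the summaries of their simulated datasets.
s_obs = 2.0
theta_acc = rng.normal(s_obs, 0.5, size=500)           # accepted parameters
s_acc = theta_acc + rng.normal(0.0, 0.2, size=500)     # their simulated summaries

def regression_adjust(theta, s, s_obs):
    """Local linear regression adjustment: fit theta ~ a + b*s on the
    accepted draws, then shift each draw to the value it would have had
    if its simulated summary had equalled the observed one."""
    b, a = np.polyfit(s, theta, 1)     # slope b, intercept a
    return theta - b * (s - s_obs)

theta_adj = regression_adjust(theta_acc, s_acc, s_obs)
# The adjusted sample concentrates more tightly around the parameter
# region compatible with s_obs than the raw accepted sample does.
print(theta_acc.std(), theta_adj.std())
```

This correction reduces the bias induced by a nonzero tolerance; weighted variants (e.g. with a kernel giving more weight to summaries near `s_obs`) and nonlinear regression versions follow the same idea.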