Loopy Belief Propagation for Approximate Inference: An Empirical Study

Kevin P. Murphy, Yair Weiss, and Michael I. Jordan
The paper "Loopy Belief Propagation for Approximate Inference: An Empirical Study" by Kevin P. Murphy, Yair Weiss, and Michael I. Jordan investigates the performance of loopy belief propagation (LBP) in various Bayesian network architectures. LBP, which is based on Pearl's polytree algorithm, is known to perform well in error-correcting codes, particularly in Turbo Codes. The authors explore whether LBP can also work effectively in a broader range of settings beyond error-correcting codes. The study compares the marginals computed using LBP to the exact marginals in four Bayesian network architectures: PYRAMID, toyQMR, ALARM, and QMR-DT. For the PYRAMID, toyQMR, and ALARM networks, LBP converges and provides good approximations to the correct marginals. However, on the QMR-DT network, LBP oscillates and does not converge, leading to poor results. The authors investigate the causes of oscillations in the QMR-DT network and find that they are not solely due to the size of the network or the number of parents. They also explore methods to prevent oscillations, such as adding momentum to the message passing equations, but these methods do not always improve the accuracy of the marginals. The results suggest that LBP can provide accurate posterior marginals in a more general setting than error-correcting codes, but it is important to check for convergence before using LBP for inference. The study highlights the need for caution when applying LBP to networks with loops, as oscillations can occur and lead to poor approximations.The paper "Loopy Belief Propagation for Approximate Inference: An Empirical Study" by Kevin P. Murphy, Yair Weiss, and Michael I. Jordan investigates the performance of loopy belief propagation (LBP) in various Bayesian network architectures. LBP, which is based on Pearl's polytree algorithm, is known to perform well in error-correcting codes, particularly in Turbo Codes. The authors explore whether LBP can also work effectively in a broader range of settings beyond error-correcting codes. The study compares the marginals computed using LBP to the exact marginals in four Bayesian network architectures: PYRAMID, toyQMR, ALARM, and QMR-DT. For the PYRAMID, toyQMR, and ALARM networks, LBP converges and provides good approximations to the correct marginals. However, on the QMR-DT network, LBP oscillates and does not converge, leading to poor results. The authors investigate the causes of oscillations in the QMR-DT network and find that they are not solely due to the size of the network or the number of parents. They also explore methods to prevent oscillations, such as adding momentum to the message passing equations, but these methods do not always improve the accuracy of the marginals. The results suggest that LBP can provide accurate posterior marginals in a more general setting than error-correcting codes, but it is important to check for convergence before using LBP for inference. The study highlights the need for caution when applying LBP to networks with loops, as oscillations can occur and lead to poor approximations.