Kevin P. Murphy, Yair Weiss, and Michael I. Jordan
This paper presents an empirical study of loopy belief propagation (LBP) as an approximate inference method in Bayesian networks. The authors ask whether LBP's success is specific to the graph structures arising in error-correcting codes or whether it performs well more generally. They compare LBP with exact inference on four Bayesian network architectures, including the real-world networks ALARM and QMR-DT. LBP often converges and provides good approximations to the correct marginals, but on the QMR-DT network it oscillates, and the oscillating beliefs bear no clear relation to the correct posteriors. Initial investigations suggest that the oscillations stem from the parameter settings, and that simple methods for preventing them can yield incorrect results.
The paper reviews the LBP algorithm, which applies Pearl's belief-propagation updates, exact on singly-connected networks, to multiply-connected networks, where they may fail to converge. It describes the algorithm's message-passing steps and normalization procedure. The authors also compare LBP with likelihood weighting, a form of importance sampling, and find that LBP generally performs well in networks with multiple loops.
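The message-passing loop can be illustrated with a minimal sketch on the smallest multiply-connected graph, a three-node cycle of binary variables. The pairwise potentials and the synchronous update schedule below are illustrative assumptions, not the paper's exact protocol (which passes pi and lambda messages on a directed network), but the core operations are the same: recompute every message from the previous iteration's messages, normalize, and repeat.

```python
import itertools

# Pairwise potentials on a 3-node cycle, each variable binary.
# The potential values are illustrative, not taken from the paper.
edges = [(0, 1), (1, 2), (2, 0)]
psi = {e: [[1.0, 0.5], [0.5, 2.0]] for e in edges}

def normalize(m):
    s = sum(m)
    return [v / s for v in m]

def edge_pot(i, j, xi, xj):
    if (i, j) in psi:
        return psi[(i, j)][xi][xj]
    return psi[(j, i)][xj][xi]

# messages[(i, j)] is the message from node i to node j, initialized uniform.
messages = {}
for i, j in edges:
    messages[(i, j)] = [0.5, 0.5]
    messages[(j, i)] = [0.5, 0.5]

def neighbors(i):
    return [j for j in range(3) if (i, j) in messages]

# Synchronous updates: every message is recomputed from the previous
# iteration's messages, then normalized.
for it in range(50):
    new = {}
    for (i, j) in messages:
        m = [0.0, 0.0]
        for xj in (0, 1):
            for xi in (0, 1):
                prod = edge_pot(i, j, xi, xj)
                for k in neighbors(i):
                    if k != j:
                        prod *= messages[(k, i)][xi]
            m[xj] += prod
        new[(i, j)] = normalize(m)
    messages = new

# Belief at each node: product of incoming messages, normalized.
beliefs = []
for i in range(3):
    b = [1.0, 1.0]
    for k in neighbors(i):
        for x in (0, 1):
            b[x] *= messages[(k, i)][x]
    beliefs.append(normalize(b))

# Exact marginals by brute-force enumeration, for comparison.
exact = [[0.0, 0.0] for _ in range(3)]
for assign in itertools.product((0, 1), repeat=3):
    p = 1.0
    for (i, j) in edges:
        p *= psi[(i, j)][assign[i]][assign[j]]
    for i in range(3):
        exact[i][assign[i]] += p
exact = [normalize(e) for e in exact]
```

On this attractive single-loop model LBP converges, and its beliefs agree with the exact marginals on the most probable state, though the loop causes the beliefs to be somewhat overconfident.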
The study uses four network architectures: PYRAMID, toyQMR, ALARM, and QMR-DT. PYRAMID and toyQMR are synthetic, while ALARM and QMR-DT are real-world networks. PYRAMID is a hierarchical network of binary nodes, and toyQMR is a bipartite network with noisy-or links, a scaled-down analogue of QMR-DT. ALARM is a Bayesian network for intensive-care patient monitoring, and QMR-DT is a large bipartite diagnostic network with the same noisy-or structure as toyQMR but many more nodes, which makes exact inference substantially harder.
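The noisy-or links in toyQMR and QMR-DT define the conditional probability of a finding given its parent diseases. A minimal sketch of the standard noisy-or CPD follows; the parameter names are illustrative, not the paper's notation:

```python
def p_finding_off(q_leak, q, d):
    """P(finding = off | disease states d) under a noisy-OR CPD.

    q_leak : probability that the leak cause fails to activate the finding
    q[i]   : probability that disease i, when present, fails to activate it
    d[i]   : 1 if disease i is present, else 0

    Each active cause independently fails to trigger the finding,
    so the "off" probabilities multiply.
    """
    p = q_leak
    for q_i, d_i in zip(q, d):
        if d_i:
            p *= q_i
    return p

def p_finding_on(q_leak, q, d):
    return 1.0 - p_finding_off(q_leak, q, d)
```

This factored form is what makes the bipartite QMR-style networks compact to specify, even though exact inference over all disease combinations remains expensive.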
On the PYRAMID, toyQMR, and ALARM networks, LBP converges and yields accurate approximations to the posterior marginals. On the QMR-DT network, however, LBP oscillates and fails to converge. Investigating the causes, the authors find that small priors and small noisy-or weights can induce oscillatory behavior. They also test adding a momentum term to damp the oscillations, which restores convergence in some cases but not all.
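A momentum-style fix can be sketched as a convex blend of the previous message with the freshly computed one; the blending weight below is an illustrative choice, not the paper's value:

```python
def damped_message(old_msg, new_msg, momentum=0.1):
    """Blend the freshly computed message with the previous one.

    momentum = 0 recovers the plain update; larger values slow the
    message dynamics and can suppress oscillation, at the risk of
    converging to inaccurate marginals.
    """
    blended = [(1.0 - momentum) * n + momentum * o
               for o, n in zip(old_msg, new_msg)]
    s = sum(blended)  # renormalize so the message stays a distribution
    return [v / s for v in blended]
```

As the paper's results caution, forcing convergence this way does not guarantee that the resulting beliefs are close to the correct posteriors.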
The paper concludes that LBP can provide accurate posterior marginals in a broader setting than error-correcting codes, but that it can oscillate under certain parameter regimes; its behavior depends on both the network structure and the parameter values. While LBP is effective for some networks, it does not always yield accurate results, and further research is needed to characterize the conditions under which it performs well.