27 January 2006 | Matthew Richardson · Pedro Domingos
The paper introduces Markov Logic Networks (MLNs), a novel approach that combines first-order logic and probabilistic graphical models into a single representation. An MLN is a first-order knowledge base with a weight attached to each formula, which together define a ground Markov network. Inference in MLNs is performed using Markov Chain Monte Carlo (MCMC) over the minimal subset of the ground network required to answer a query. Weights are learned from relational databases by optimizing a pseudo-likelihood measure, and the paper also discusses using inductive logic programming techniques to learn additional clauses. Experiments with a real-world database and knowledge base in a university domain demonstrate the effectiveness of MLNs compared to purely logical and purely probabilistic approaches. The paper covers the fundamentals of Markov networks and first-order logic, introduces MLNs, presents algorithms for inference and learning, and evaluates their performance.
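To make the semantics concrete, here is a minimal sketch (not the paper's implementation) of how a weighted formula induces a distribution over possible worlds. It assumes a toy MLN with one formula, Smokes(x) ⇒ Cancer(x), an illustrative weight `w`, and two constants A and B; each world's probability is proportional to exp(w · n), where n counts the satisfied groundings. The domain is small enough to enumerate exactly, so no MCMC is needed here.

```python
import itertools
import math

# Toy MLN (illustrative names): one formula Smokes(x) => Cancer(x)
# with weight w, constants {A, B}. P(world) ∝ exp(w * n(world)),
# where n(world) is the number of satisfied groundings of the formula.
w = 1.5
atoms = ["Smokes(A)", "Cancer(A)", "Smokes(B)", "Cancer(B)"]

def n_satisfied(world):
    """Count satisfied groundings of Smokes(c) => Cancer(c)."""
    return sum(
        1 for c in ("A", "B")
        if (not world[f"Smokes({c})"]) or world[f"Cancer({c})"]
    )

# Enumerate all 2^4 possible worlds with their unnormalized weights.
worlds = []
for bits in itertools.product([False, True], repeat=len(atoms)):
    world = dict(zip(atoms, bits))
    worlds.append((world, math.exp(w * n_satisfied(world))))

def prob(query_atom, evidence=None):
    """Exact conditional probability by summing over consistent worlds."""
    evidence = evidence or {}
    num = sum(wt for wrld, wt in worlds
              if wrld[query_atom]
              and all(wrld[a] == v for a, v in evidence.items()))
    den = sum(wt for wrld, wt in worlds
              if all(wrld[a] == v for a, v in evidence.items()))
    return num / den

# Conditioning on Smokes(A)=True raises P(Cancer(A)) above the prior,
# because only worlds with Cancer(A)=True then satisfy A's grounding.
p_prior = prob("Cancer(A)")
p_cond = prob("Cancer(A)", {"Smokes(A)": True})
```

With the evidence Smokes(A)=True, the B groundings cancel between numerator and denominator, so `p_cond` reduces to exp(w)/(exp(w)+1): the soft-constraint analogue of the hard implication, which in the infinite-weight limit recovers pure first-order logic.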