Markov logic networks

2006 | Matthew Richardson · Pedro Domingos
Markov logic networks (MLNs) combine first-order logic with probabilistic graphical models. An MLN is a first-order knowledge base with a weight attached to each formula. Together with a set of constants, it specifies a ground Markov network containing one feature per possible grounding of each formula, weighted by that formula's weight. MLNs subsume propositional probabilistic models and recover pure first-order logic as the limiting case in which all weights are infinite; finite weights let them tolerate uncertainty, contradictions, and imperfect knowledge while still representing large Markov networks compactly.

Inference is performed by MCMC over the minimal subset of the ground network needed to answer the query. Weights are learned efficiently from relational databases by optimizing pseudo-likelihood, and additional clauses can be learned via inductive logic programming.

Experiments on a university-domain database show that MLNs outperform purely logical and purely probabilistic approaches, particularly on link prediction. They apply to tasks such as collective classification, link prediction, and social network modeling, and to domains including the Semantic Web and mass collaboration.
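To make the model concrete, here is a minimal sketch of the MLN probability semantics described above: a world's unnormalized probability is exp(Σᵢ wᵢ nᵢ(x)), where nᵢ(x) counts the satisfied groundings of formula i. The formula `Smokes(x) => Cancer(x)`, the weight 1.5, and the constants are illustrative choices, not from the paper; the toy model is small enough to normalize by exhaustive enumeration, whereas a real MLN would use MCMC.

```python
import itertools
import math

# Toy MLN (illustrative, not the authors' implementation): one weighted
# formula, Smokes(x) => Cancer(x), over two constants.
CONSTANTS = ["Anna", "Bob"]
WEIGHT = 1.5

def n_satisfied(world):
    """Count satisfied groundings of Smokes(x) => Cancer(x).

    `world` maps ground atoms like ("Smokes", "Anna") to booleans.
    """
    return sum(
        1
        for c in CONSTANTS
        if (not world[("Smokes", c)]) or world[("Cancer", c)]
    )

def world_weight(world):
    # Unnormalized probability: exp(sum over formulas of w_i * n_i(x)).
    return math.exp(WEIGHT * n_satisfied(world))

# Enumerate all 2^4 truth assignments to the four ground atoms to compute
# the partition function Z exactly; real MLNs sample with MCMC instead.
atoms = [(p, c) for p in ("Smokes", "Cancer") for c in CONSTANTS]
worlds = [
    dict(zip(atoms, vals))
    for vals in itertools.product([False, True], repeat=len(atoms))
]
Z = sum(world_weight(w) for w in worlds)

def prob(world):
    return world_weight(world) / Z
```

Note that a world violating one grounding is exp(1.5) times less likely than an otherwise identical world satisfying it; pushing the weight toward infinity recovers the hard logical constraint, which is the "first-order logic as a special case" behavior mentioned above.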