Markov Logic Networks
Recently Published Documents


TOTAL DOCUMENTS: 120 (FIVE YEARS: 3)

H-INDEX: 13 (FIVE YEARS: 0)

2022, pp. 108158
Author(s): Zhimin Zhang, Tao Zhu, Dazhi Gao, Jiabo Xu, Hong Liu, ...

2021
Author(s): Arnaud Nguembang Fadja, Fabrizio Riguzzi, Evelina Lamma

Probabilistic logic programming (PLP) combines logic programs and probabilities. Due to its expressiveness and simplicity, it has been considered a powerful tool for learning and reasoning in relational domains characterized by uncertainty. Still, learning the parameters and the structure of general PLP is computationally expensive due to the inference cost. We have recently proposed a restriction of the general PLP language called hierarchical PLP (HPLP), in which clauses and predicates are hierarchically organized. HPLPs can be converted into arithmetic circuits or deep neural networks, and inference is much cheaper than for general PLP. In this paper we present algorithms for learning both the parameters and the structure of HPLPs from data. We first present an algorithm, called parameter learning for hierarchical probabilistic logic programs (PHIL), which performs parameter estimation of HPLPs using gradient descent and expectation maximization. We also propose structure learning of hierarchical probabilistic logic programming (SLEAHP), which learns both the structure and the parameters of HPLPs from data. Experiments were performed comparing PHIL and SLEAHP with state-of-the-art PLP and Markov logic network systems for parameter and structure learning, respectively: PHIL was compared with EMBLEM, ProbLog2 and Tuffy, and SLEAHP with SLIPCOVER, PROBFOIL+, MLN-BC, MLN-BT and RDN-B. The experiments on five well-known datasets show that our algorithms achieve similar and often better accuracies in a shorter time.
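As a rough, hypothetical sketch (not PHIL itself), the toy Python below shows the kind of computation such a program reduces to once compiled into an arithmetic circuit: clause probabilities combine by a noisy-or, and gradient descent fits them to labelled examples. The `forward`/`train` functions and the data are invented for illustration; the real system handles full relational programs.

```python
import math, random

# Toy arithmetic-circuit evaluation for one HPLP-style predicate:
# the query succeeds if any of its probabilistic clauses fires, so
# clause probabilities combine by noisy-or,
#     P(q) = 1 - prod_i (1 - p_i * b_i),
# where b_i in {0, 1} says whether clause i's body holds in the
# example. Everything here is a hypothetical toy, not PHIL itself.

def forward(probs, bodies):
    p = 1.0
    for pi, bi in zip(probs, bodies):
        p *= 1.0 - pi * bi
    return 1.0 - p

def train(examples, n_clauses, lr=0.5, epochs=2000):
    # Keep each clause probability in (0, 1) via a sigmoid.
    sig = lambda x: 1.0 / (1.0 + math.exp(-x))
    w = [random.uniform(-1.0, 1.0) for _ in range(n_clauses)]
    for _ in range(epochs):
        for bodies, target in examples:
            probs = [sig(wi) for wi in w]
            p = forward(probs, bodies)
            # Cross-entropy gradient: dL/dp = (p - t) / (p (1 - p)).
            dloss = (p - target) / max(p * (1.0 - p), 1e-9)
            for i in range(n_clauses):
                if not bodies[i]:
                    continue
                # dp/dp_i = prod_{j != i} (1 - p_j * b_j)
                rest = 1.0
                for j in range(n_clauses):
                    if j != i:
                        rest *= 1.0 - probs[j] * bodies[j]
                # Chain rule through the sigmoid reparameterisation.
                w[i] -= lr * dloss * rest * probs[i] * (1.0 - probs[i])
    return [sig(wi) for wi in w]

# Clause 1 fires only for positive examples in this toy data, so its
# learned probability should approach 1 while clause 2's stays low.
data = [([1, 1], 1.0), ([1, 0], 1.0), ([0, 1], 0.0)]
print(train(data, n_clauses=2))
```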


Author(s): Timothy van Bremen, Ondrej Kuzelka

We study the symmetric weighted first-order model counting task and present ApproxWFOMC, a novel anytime method for efficiently bounding the weighted first-order model count of a sentence given an unweighted first-order model counting oracle. The algorithm has applications to inference in a variety of first-order probabilistic representations, such as Markov logic networks and probabilistic logic programs. Crucially for many applications, no assumptions are made on the form of the input sentence. Instead, the algorithm makes use of the symmetry inherent in the problem by imposing cardinality constraints on the number of possible true groundings of a sentence's literals. Realising the first-order model counting oracle in practice using the approximate hashing-based model counter ApproxMC3, we show that our algorithm is competitive with existing approximate and exact techniques for inference in first-order probabilistic models. We additionally provide PAC guarantees on the accuracy of the bounds generated.
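To make the cardinality-constraint idea concrete, here is a hedged toy sketch (my illustration, not the ApproxWFOMC implementation): models are bucketed by how many groundings of a predicate are true, each bucket is counted with an unweighted oracle, and the count is reweighted by w^k * w̄^(n−k). The brute-force `count_models` below stands in for a hashing-based counter such as ApproxMC3; the propositional encoding is invented.

```python
from itertools import product

# Symmetric weighted model counting via cardinality buckets.
# Toy propositional stand-in: `n_atoms` groundings of one predicate P,
# `phi` tests a truth assignment, and (w, wbar) weight a true/false
# grounding of P. All names and data here are hypothetical.

def count_models(phi, n_atoms, k):
    """Unweighted oracle: models of phi with exactly k true atoms.
    In ApproxWFOMC this role is played by a hashing-based approximate
    counter (e.g. ApproxMC3); here we simply enumerate."""
    return sum(
        1
        for bits in product([False, True], repeat=n_atoms)
        if sum(bits) == k and phi(bits)
    )

def wfomc(phi, n_atoms, w, wbar):
    total = 0.0
    for k in range(n_atoms + 1):
        # By symmetry, every model in this bucket has the same weight,
        # so one unweighted count per bucket suffices.
        total += count_models(phi, n_atoms, k) * (w ** k) * (wbar ** (n_atoms - k))
    return total

# Example: phi = "at least one grounding of P is true".
phi = lambda bits: any(bits)
print(wfomc(phi, n_atoms=3, w=2.0, wbar=1.0))  # (2+1)^3 - 1 = 26.0
```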


Author(s): Congcong Ge, Yunjun Gao, Xiaoye Miao, Bin Yao, Haobo Wang

2020, pp. 409-432
Author(s): Omar Adjali, Amar Ramdane-Cherif

This article describes a semantic framework that demonstrates an approach for modeling and reasoning based on the environment knowledge representation language (EKRL) to enhance interaction between robots and their environment. Unlike EKRL, standard binary approaches such as the OWL language fail to represent knowledge in an expressive way. The authors show in this work how to model the environment and interactions expressively with first-order and second-order EKRL data structures, and how to reason for decision-making using inference capabilities based on a complex unification algorithm. Since robot environments are inherently subject to noise and partial observability, the authors extend the EKRL framework with probabilistic reasoning based on Markov logic networks to manage uncertainty.
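EKRL's unification algorithm is not spelled out in this abstract; purely as a generic illustration of the first-order unification that such inference relies on, here is a minimal textbook sketch (occurs check omitted, term encoding invented for the example).

```python
def unify(x, y, subst=None):
    """Minimal textbook first-order unification. Terms: variables are
    strings starting with '?', compound terms are tuples
    (functor, arg1, ...), constants are other strings. A generic
    sketch, not EKRL's actual algorithm."""
    if subst is None:
        subst = {}
    if subst is False:
        return False
    if x == y:
        return subst
    if isinstance(x, str) and x.startswith("?"):
        return bind(x, y, subst)
    if isinstance(y, str) and y.startswith("?"):
        return bind(y, x, subst)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            subst = unify(xi, yi, subst)
            if subst is False:
                return False
        return subst
    return False

def bind(var, val, subst):
    # Dereference already-bound variables before extending the map.
    if var in subst:
        return unify(subst[var], val, subst)
    if isinstance(val, str) and val.startswith("?") and val in subst:
        return unify(var, subst[val], subst)
    return {**subst, var: val}

print(unify(("holds", "?obj", "kitchen"),
            ("holds", "cup", "kitchen")))  # -> {'?obj': 'cup'}
```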


Author(s): Yuqiao Chen, Nicholas Ruozzi, Sriraam Natarajan

Lifted inference algorithms for first-order logic models, e.g., Markov logic networks (MLNs), have been of significant interest in recent years. Lifted inference methods exploit model symmetries in order to reduce the size of the model and, consequently, the computational cost of inference. In this work, we consider the problem of lifted inference in MLNs with continuous or both discrete and continuous groundings. Existing work on lifting with continuous groundings has mostly been limited to special classes of models, e.g., Gaussian models, for which variable elimination or message-passing updates can be computed exactly. Here, we develop approximate lifted inference schemes based on particle sampling. We demonstrate empirically that our approximate lifting schemes perform comparably to the existing state-of-the-art for Gaussian MLNs, while having the flexibility to be applied to models with arbitrary potential functions.
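As a hedged sketch of how particle sampling and lifting can combine (my illustration, not the paper's algorithm), the snippet below estimates a posterior mean when many interchangeable continuous groundings share one potential: the potential is evaluated once per particle and raised to the power of the group size, rather than looped over identical ground factors. The prior, potential, and sizes are invented.

```python
import numpy as np

# Particle-based estimate of a posterior mean under an unnormalised
# hybrid-MLN-style density
#     p(x)  propto  prior(x) * phi(x)^n,
# where n interchangeable continuous groundings share one potential
# phi. Lifting exploits the symmetry: evaluate phi once per particle
# and raise it to the power n, instead of visiting n ground factors.

rng = np.random.default_rng(0)
n_groundings = 50                          # size of the symmetry group
log_phi = lambda x: -0.5 * (x - 1.0) ** 2  # soft "x = 1" potential (toy)

# Importance sampling with the prior N(0, 2^2) as the proposal, so a
# particle's importance weight is exactly phi(x)^n.
particles = rng.normal(0.0, 2.0, size=100_000)
log_w = n_groundings * log_phi(particles)  # lifted: one call, power n
log_w -= log_w.max()                       # stabilise before exponentiating
w = np.exp(log_w)
posterior_mean = np.sum(w * particles) / np.sum(w)
print(posterior_mean)  # pulled toward 1 by the 50 symmetric potentials
```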


Author(s): Mohammad Maminur Islam, Somdeb Sarkhel, Deepak Venugopal

We present a dense representation for Markov Logic Networks (MLNs) called Obj2Vec that encodes symmetries in the MLN structure. Identifying symmetries is a key challenge for lifted inference algorithms and we leverage advances in neural networks to learn symmetries which are hard to specify using hand-crafted features. Specifically, we learn an embedding for MLN objects that predicts the context of an object, i.e., objects that appear along with it in formulas of the MLN, since common contexts indicate symmetry in the distribution. Importantly, our formulation leverages well-known skip-gram models that allow us to learn the embedding efficiently. Finally, to reduce the size of the ground MLN, we sample objects based on their learned embeddings. We integrate Obj2Vec with several inference algorithms, and show the scalability and accuracy of our approach compared to other state-of-the-art methods.
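As a rough, hypothetical sketch of the skip-gram idea behind Obj2Vec (not the authors' code), the snippet below treats the objects of each ground formula as a "sentence", learns embeddings with gensim's skip-gram Word2Vec, and then queries for objects with similar contexts; such objects are the candidates one would sample together to shrink the ground MLN. The domain, formulas, and hyperparameters are all invented.

```python
from gensim.models import Word2Vec

# Each ground formula's objects form a "sentence", so skip-gram learns
# to predict an object's context, i.e. the objects it co-occurs with.
# Objects with similar contexts (plausibly symmetric in the MLN) end
# up with nearby embeddings. Toy, made-up domain below.
ground_formulas = [
    ["alice", "bob"], ["alice", "carol"],
    ["dave", "bob"],  ["dave", "carol"],  # dave's contexts mirror alice's
    ["erin", "frank"],
]

model = Word2Vec(
    ground_formulas,
    vector_size=16,   # small embedding for a toy domain
    window=5,
    min_count=1,
    sg=1,             # skip-gram objective
    epochs=200,
    seed=0,
)

# Objects whose embeddings are close are candidates to be sampled or
# merged together when reducing the size of the ground network.
print(model.wv.most_similar("alice", topn=2))
```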

