Horn Clause Contraction Functions

2013 ◽  
Vol 48 ◽  
pp. 475-511 ◽  
Author(s):  
J. P. Delgrande ◽  
R. Wassermann

In classical, AGM-style belief change, it is assumed that the underlying logic contains classical propositional logic. This is clearly a limiting assumption, particularly in Artificial Intelligence. Consequently there has been recent interest in studying belief change in approaches where the full expressivity of classical propositional logic is not obtained. In this paper we investigate belief contraction in Horn knowledge bases. We point out that the obvious extension to the Horn case, involving Horn remainder sets as a starting point, is problematic. Not only do Horn remainder sets have undesirable properties, but also some desirable Horn contraction functions are not captured by this approach. For Horn belief set contraction, we develop an account in terms of a model-theoretic characterisation involving weak remainder sets. Maxichoice and partial meet Horn contraction is specified, and we show that the problems arising with earlier work are resolved by these approaches. As well, constructions of the specific operators and sets of postulates are provided, and representation results are obtained. We also examine Horn package contraction, or contraction by a set of formulas. Again, we give a construction and postulate set, linking them via a representation result. Lastly, we investigate the closely related notion of forgetting in Horn clauses. This work is arguably interesting since Horn clauses have found widespread use in AI; moreover, the results given here may potentially be extended to other areas which make use of Horn-like reasoning, such as logic programming, rule-based systems, and description logics. Finally, since Horn reasoning is weaker than classical reasoning, this work sheds light on the foundations of belief change.
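As background for why the Horn fragment is attractive in AI: entailment of an atom from a set of definite Horn clauses is decidable in polynomial time by forward chaining (unit propagation). The sketch below, with hypothetical names of our choosing (`horn_consequences`, `kb`), illustrates that mechanism; it is not the paper's contraction construction.

```python
# A definite Horn clause is (body, head): `body` a frozenset of atoms,
# `head` a single atom. Forward chaining closes a knowledge base under
# its rules, yielding every entailed atom.

def horn_consequences(clauses):
    """Atoms derivable from a set of definite Horn clauses."""
    derived = set()
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if head not in derived and body <= derived:
                derived.add(head)
                changed = True
    return derived

kb = [
    (frozenset(), "a"),            # fact: a
    (frozenset({"a"}), "b"),       # a -> b
    (frozenset({"a", "b"}), "c"),  # a & b -> c
]
print(horn_consequences(kb))  # {'a', 'b', 'c'}
```

The loop runs at most one pass per derived atom, so the procedure is polynomial in the size of the knowledge base, in contrast to the NP-hardness of full propositional entailment.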

Author(s):  
Adrian Haret ◽  
Johannes P. Wallner ◽  
Stefan Woltran

We study a type of change on knowledge bases inspired by the dynamics of formal argumentation systems, where the goal is to enforce acceptance of certain arguments. We put forward that enforcing acceptance of arguments can be viewed as a member of the wider family of belief change operations, and that an axiomatic treatment of it is therefore desirable. In our case, laying down axioms enables a precise account of the close connection between enforcing arguments and belief revision. Our analysis of enforcing arguments proceeds by (i) axiomatizing it as an operation in propositional logic and providing a representation result in terms of rankings on sets of interpretations, (ii) showing that it stands in close relationship to belief revision, and (iii) using it as a gateway towards a principled treatment of enforcement in abstract argumentation.


Author(s):  
Nadia Creignou ◽  
Adrian Haret ◽  
Odile Papini ◽  
Stefan Woltran

In line with recent work on belief change in fragments of propositional logic, we study belief update in the Horn fragment. We start from the standard KM postulates used to axiomatize belief update operators; these postulates lend themselves to semantic characterizations in terms of partial (resp. total) preorders on possible worlds. Since the Horn fragment is not closed under disjunction, the standard postulates have to be adapted for the Horn fragment. Moreover, a restriction on the preorders (i.e., Horn compliance) and additional postulates are needed to obtain sensible characterizations for the Horn fragment, and this leads to our main contribution: a representation result which shows that the class of update operators captured by Horn compliant partial (resp. total) preorders over possible worlds is precisely that given by the adapted and augmented Horn update postulates. With these results at hand, we provide concrete Horn update operators and are able to shed light on Horn revision operators based on partial preorders.
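The KM postulates mentioned above characterize update operators via preorders on possible worlds applied model by model. As a concrete illustration of that semantic style (not the paper's Horn-compliant operators), here is a minimal sketch of the classic Winslett/PMA update, which orders worlds by symmetric difference; the names `pma_update`, `worlds`, and the three-atom signature are assumptions of ours.

```python
from itertools import chain, combinations

ATOMS = ("p", "q", "r")

def worlds():
    # All truth assignments, each represented as the frozenset of true atoms.
    return [frozenset(s) for s in chain.from_iterable(
        combinations(ATOMS, k) for k in range(len(ATOMS) + 1))]

def pma_update(phi_models, mu_models):
    """Winslett-style update: for each model w of phi, keep the models
    of mu at minimal symmetric difference from w, then take the union."""
    result = set()
    for w in phi_models:
        dists = {m: len(w ^ m) for m in mu_models}
        dmin = min(dists.values())
        result |= {m for m, d in dists.items() if d == dmin}
    return result

# phi: p & q (r unconstrained); new information mu: ~p
phi = {w for w in worlds() if "p" in w and "q" in w}
mu = {w for w in worlds() if "p" not in w}
print(pma_update(phi, mu))  # q is retained, r keeps its old value per world
```

Each world of the original theory is revised locally by its own minimal-change preorder; this per-world treatment is precisely what distinguishes update from revision in the KM setting.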


2017 ◽  
Vol 60 ◽  
pp. 1165-1213 ◽  
Author(s):  
James P. Delgrande

Forgetting is an operation on knowledge bases that has been addressed in different areas of Knowledge Representation and with respect to different formalisms, including classical propositional and first-order logic, modal logics, logic programming, and description logics. Definitions of forgetting have been expressed in terms of manipulation of formulas, sets of postulates, isomorphisms between models, bisimulations, second-order quantification, elementary equivalence, and others. In this paper, forgetting is regarded as an abstract belief change operator, independent of the underlying logic. The central thesis is that forgetting amounts to a reduction in the language, specifically the signature, of a logic. The main definition is simple: the result of forgetting a portion of a signature in a theory is given by the set of logical consequences of this theory over the reduced language. This definition offers several advantages. Foremost, it provides a uniform approach to forgetting, with a definition that is applicable to any logic with a well-defined consequence relation. Hence it generalises a disparate set of logic-specific definitions with a general, high-level definition. Results obtained in this approach are thus applicable to all subsumed formal systems, and many results are obtained much more straightforwardly. This view also leads to insights with respect to specific logics: for example, forgetting in first-order logic is somewhat different from the accepted approach. Moreover, the approach clarifies the relation between forgetting and related operations, including belief contraction.
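In the special case of classical propositional logic, the abstract's definition (consequences of the theory over the reduced signature) coincides with the textbook identity forget(φ, p) ≡ φ[p/⊤] ∨ φ[p/⊥]. The model-based sketch below checks that identity on a tiny example; the helper names `models` and `forget` are ours, and this illustrates only the propositional instance, not the paper's logic-independent definition.

```python
from itertools import chain, combinations

def models(formula, atoms):
    """All truth assignments (frozensets of true atoms) satisfying
    `formula`, given as a predicate over such assignments."""
    all_worlds = (frozenset(s) for s in chain.from_iterable(
        combinations(atoms, k) for k in range(len(atoms) + 1)))
    return {w for w in all_worlds if formula(w)}

def forget(phi_models, atom):
    """Models after forgetting `atom`: a world satisfies the reduced
    theory iff some variant of it (atom set true or false) satisfies phi.
    This is existential quantification over the forgotten letter."""
    return ({w | {atom} for w in phi_models}
            | {w - {atom} for w in phi_models})

atoms = ("p", "q")
# phi: p & (p -> q), i.e. p & q
phi = models(lambda w: "p" in w and ("p" not in w or "q" in w), atoms)
# Forgetting p leaves exactly the p-free consequence q.
print(forget(phi, "p") == models(lambda w: "q" in w, atoms))  # True
```

Note that the result is the *strongest* p-free consequence of φ, matching the view of forgetting as restriction of the consequence set to the reduced language.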


2007 ◽  
Vol 72 (3) ◽  
pp. 994-1002 ◽  
Author(s):  
George Kourousias ◽  
David Makinson

The splitting theorem says that any set of formulae has a finest representation as a family of letter-disjoint sets. Parikh formulated this for classical propositional logic, proved it in the finite case, used it to formulate a criterion for relevance in belief change, and showed that AGM partial meet revision can fail the criterion. In this paper we make three further contributions: we establish a new version of the well-known interpolation theorem, which we call parallel interpolation; we use it to prove the splitting theorem in the infinite case; and we show how AGM belief change operations may be modified, if desired, so as to ensure satisfaction of Parikh's relevance criterion.
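The letter-disjoint grouping underlying the splitting theorem can be computed syntactically with a union-find over shared letters. The sketch below, with hypothetical names of our choosing, yields the letter-sharing partition of a given set of formulas; Parikh's finest splitting is defined up to logical equivalence and may be strictly finer than this syntactic partition.

```python
def split_by_letters(formulas):
    """Group formulas so that distinct groups share no propositional
    letters. `formulas` maps a formula name to its set of letters."""
    parent = {}

    def find(x):
        while parent.setdefault(x, x) != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Link each formula to every letter it mentions; formulas sharing a
    # letter end up in the same component.
    for name, letters in formulas.items():
        for letter in letters:
            union(name, ("letter", letter))

    groups = {}
    for name in formulas:
        groups.setdefault(find(name), set()).add(name)
    return {frozenset(g) for g in groups.values()}

fs = {"f1": {"p", "q"}, "f2": {"q", "r"}, "f3": {"s"}}
print(split_by_letters(fs))  # {frozenset({'f1', 'f2'}), frozenset({'f3'})}
```

Here f1 and f2 share the letter q and so fall into one cell, while f3 is letter-disjoint from both; the partition witnesses that belief change in one cell can leave the others untouched, which is the intuition behind Parikh's relevance criterion.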


2014 ◽  
Vol 51 ◽  
pp. 227-254 ◽  
Author(s):  
Z. Zhuang ◽  
M. Pagnucco

The AGM framework is the benchmark approach in belief change. Since the framework assumes an underlying logic containing classical Propositional Logic, it cannot be applied to systems with a logic weaker than Propositional Logic. To remedy this limitation, several researchers have studied AGM-style contraction and revision under the Horn fragment of Propositional Logic (i.e., Horn logic). In this paper, we contribute to this line of research by investigating the Horn version of the AGM entrenchment-based contraction. The study is challenging as the construction of entrenchment-based contraction refers to arbitrary disjunctions, which are not expressible under Horn logic. In order to adapt the construction to Horn logic, we make use of a Horn approximation technique called Horn strengthening. We provide a representation theorem for the newly constructed contraction, which we refer to as entrenchment-based Horn contraction. Ideally, contractions defined under Horn logic (i.e., Horn contractions) should be as rational as AGM contraction. We propose the notion of Horn equivalence, which intuitively captures the equivalence between Horn contraction and AGM contraction. We show that, under this notion, entrenchment-based Horn contraction is equivalent to a restricted form of entrenchment-based contraction.
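The Horn strengthening idea is concrete enough to sketch: a clause is Horn iff it has at most one positive literal, and the Horn strengthenings of a non-Horn clause are its maximal Horn sub-clauses, obtained by keeping all negative literals and exactly one of the positive ones. The function name and literal encoding below are assumptions of ours, chosen for illustration.

```python
def horn_strengthenings(clause):
    """Maximal Horn sub-clauses of a clause (sets of string literals;
    a leading '-' marks negation). A clause is Horn iff it contains at
    most one positive literal."""
    neg = frozenset(l for l in clause if l.startswith("-"))
    pos = [l for l in clause if not l.startswith("-")]
    if len(pos) <= 1:             # already Horn: nothing to strengthen
        return {frozenset(clause)}
    # Keep all negative literals plus one positive literal, each way.
    return {neg | {p} for p in pos}

c = {"p", "q", "-r"}              # p v q v ~r, not Horn (two positives)
print(horn_strengthenings(c))
# {frozenset({'-r', 'p'}), frozenset({'-r', 'q'})}
```

Each strengthening entails the original clause, which is what makes the family usable as a Horn approximation when a construction (here, entrenchment-based contraction) would otherwise appeal to disjunctions outside the fragment.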


Axioms ◽  
2019 ◽  
Vol 8 (4) ◽  
pp. 115 ◽  
Author(s):  
Joanna Golińska-Pilarek ◽  
Magdalena Welle

We study deduction systems for the weakest, extensional and two-valued non-Fregean propositional logic SCI. The language of SCI is obtained by expanding the language of classical propositional logic with a new binary connective ≡ that expresses the identity of two statements; that is, it connects two statements and forms a new one, which is true whenever the semantic correlates of the arguments are the same. On the formal side, SCI is an extension of classical propositional logic with axioms characterizing the identity connective, postulating that identity must be an equivalence and obey an extensionality principle. First, we present and discuss two types of systems for SCI known from the literature, namely sequent calculus and a dual tableau-like system. Then, we present a new dual tableau system for SCI and prove its soundness and completeness. Finally, we discuss and compare the systems presented in the paper.


2010 ◽  
Vol 3 (1) ◽  
pp. 41-70 ◽  
Author(s):  
ROGER D. MADDUX

Sound and complete semantics for classical propositional logic can be obtained by interpreting sentences as sets. Replacing sets with commuting dense binary relations produces an interpretation that turns out to be sound but not complete for R. Adding transitivity yields sound and complete semantics for RM, because all normal Sugihara matrices are representable as algebras of binary relations.


2021 ◽  
Vol 102 ◽  
pp. 02001 ◽  
Author(s):  
Anja Wilhelm ◽  
Wolfgang Ziegler

The primary focus of technical communication (TC) in the past decade has been the system-assisted generation and utilization of standardized, structured, and classified content for dynamic output solutions. Nowadays, machine learning (ML) approaches offer a new opportunity to integrate unstructured data into existing knowledge bases without the need to manually organize information into topic-based content enriched with semantic metadata. To make the field of artificial intelligence (AI) more accessible for technical writers and content managers, cloud-based machine learning as a service (MLaaS) solutions provide a starting point for domain-specific ML modelling while unloading the modelling process from extensive coding, data processing and storage demands. Therefore, information architects can focus on information extraction tasks and on prospects to include pre-existing knowledge from other systems into the ML modelling process. In this paper, the capability and performance of a cloud-based ML service, IBM Watson, are analysed to assess their value for semantic context analysis. The ML model is based on a supervised learning method and features deep learning (DL) and natural language processing (NLP) techniques. The subject of the analysis is a corpus of scientific publications on coronavirus disease 2019 (COVID-19). The analysis focuses on information extraction regarding preventive measures and effects of the pandemic on healthcare workers.

