introduction rule
Recently Published Documents

TOTAL DOCUMENTS: 11 (five years: 2)
H-INDEX: 3 (five years: 0)

Author(s): Owen Griffiths, Arif Ahmed

Abstract: The best-known syntactic account of the logical constants is inferentialism. Following Wittgenstein’s thought that meaning is use, inferentialists argue that meanings of expressions are given by introduction and elimination rules. This is especially plausible for the logical constants, where standard presentations divide inference rules in just this way. But not just any rules will do, as we’ve learnt from Prior’s famous example of tonk, and the usual extra constraint is harmony. Where does this leave identity? It’s usually taken as a logical constant but it doesn’t seem harmonious: standardly, the introduction rule (reflexivity) only concerns a subset of the formulas canvassed by the elimination rule (Leibniz’s law). In response, Read [5, 8] and Klev [3] amend the standard approach. We argue that both attempts fail, in part because of a misconception regarding inferentialism and identity that we aim to identify and clear up.
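For orientation, the rules at issue can be displayed side by side (standard natural-deduction notation; a sketch of the textbook rules, not the amended rules of Read or Klev):

```latex
% Prior's tonk: the introduction rule outruns anything the elimination
% rule could harmoniously warrant, so together the pair licenses the
% inference from any A to any B.
\[
\frac{A}{A \mathbin{\mathrm{tonk}} B}\;(\mathrm{tonk}\text{-I})
\qquad
\frac{A \mathbin{\mathrm{tonk}} B}{B}\;(\mathrm{tonk}\text{-E})
\]

% Identity: the introduction rule (reflexivity) only ever yields
% formulas of the shape t = t, while the elimination rule
% (Leibniz's law) consumes arbitrary identities t = u.
\[
\frac{}{t = t}\;(=\text{-I})
\qquad
\frac{t = u \quad \varphi(t)}{\varphi(u)}\;(=\text{-E})
\]
```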


Author(s): Nils Kürbis

Abstract: This paper considers a formalisation of classical logic using general introduction rules and general elimination rules. It proposes a definition of ‘maximal formula’, ‘segment’ and ‘maximal segment’ suitable to the system, and gives reduction procedures for them. It is then shown that deductions in the system convert into normal form, i.e. deductions that contain neither maximal formulas nor maximal segments, and that deductions in normal form satisfy the subformula property. Tarski’s Rule is treated as a general introduction rule for implication. The general introduction rule for negation has a similar form. Maximal formulas with implication or negation as main operator require reduction procedures of a more intricate kind not present in normalisation for intuitionist logic.
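To fix ideas, here is the general-rule format for a familiar case (von Plato-style general elimination for conjunction; a sketch for orientation, not the paper’s full system of general introduction rules):

```latex
% Standard vs. general elimination for conjunction: the general rule
% discharges the assumptions A, B in a subderivation of an arbitrary C,
% rather than concluding a fixed subformula directly.
\[
\frac{A \wedge B}{A}\;(\wedge\text{E}_1)
\qquad
\frac{A \wedge B \quad
      \begin{array}{c}[A,\,B]\\ \vdots\\ C\end{array}}
     {C}\;(\wedge\text{E}_{\mathrm{gen}})
\]
```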


Author(s): Neil Tennant

We explicate the different ways that a first-order sentence can be true (resp., false) in a model M, as formal objects, called (M-relative) truth-makers (resp., falsity-makers). M-relative truth-makers and falsity-makers are co-inductively definable, by appeal to the “atomic facts” in M, and to certain rules of verification and of falsification, collectively called rules of evaluation. Each logical operator has a rule of verification, much like an introduction rule; and a rule of falsification, much like an elimination rule. Applications of the rules (∀) and (∃) involve infinite furcation when the domain of M is infinite. But even in the infinite case, truth-makers and falsity-makers are tree-like objects whose branches are at most finitely long. A sentence φ is true (resp., false) in a model M (in the sense of Tarski) if and only if there exists π such that π is an M-relative truth-maker (resp., falsity-maker) for φ. With “ways of being true” explicated as these logical truth-makers, one can re-conceive logical consequence between given premises and a conclusion. It obtains just in case there is a suitable method for transforming M-relative truth-makers for the premises into an M-relative truth-maker for the conclusion, whatever the model M may be.
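The evaluation rules can be sketched for finite models, where the co-induction reduces to ordinary recursion and each connective gets one verification and one falsification clause. This is a hypothetical encoding for illustration only: the tuple representation, function names, and model format are assumptions of this sketch, not Tennant’s formalism.

```python
def subst(phi, v, c):
    """Replace free occurrences of variable v by constant c."""
    op = phi[0]
    if op == "atom":
        return ("atom", phi[1]) + tuple(c if t == v else t for t in phi[2:])
    if op == "not":
        return ("not", subst(phi[1], v, c))
    if op in ("and", "or"):
        return (op, subst(phi[1], v, c), subst(phi[2], v, c))
    if op in ("all", "some"):
        if phi[1] == v:            # v is rebound here; stop substituting
            return phi
        return (op, phi[1], subst(phi[2], v, c))

def verify(phi, dom, atoms):
    """Return a tree-shaped truth-maker for phi in the model, or None."""
    op = phi[0]
    if op == "atom":
        return ("V-atom", phi) if phi[1:] in atoms else None
    if op == "not":                # verify a negation by falsifying its body
        fm = falsify(phi[1], dom, atoms)
        return ("V-not", fm) if fm else None
    if op == "and":
        l, r = verify(phi[1], dom, atoms), verify(phi[2], dom, atoms)
        return ("V-and", l, r) if l and r else None
    if op == "or":
        for sub in (phi[1], phi[2]):
            tm = verify(sub, dom, atoms)
            if tm:
                return ("V-or", tm)
        return None
    if op == "all":                # one branch per domain element (furcation)
        branches = [verify(subst(phi[2], phi[1], c), dom, atoms) for c in dom]
        return ("V-all", tuple(branches)) if all(branches) else None
    if op == "some":               # a witness plus its truth-maker
        for c in dom:
            tm = verify(subst(phi[2], phi[1], c), dom, atoms)
            if tm:
                return ("V-some", c, tm)
        return None

def falsify(phi, dom, atoms):
    """Return a tree-shaped falsity-maker for phi in the model, or None."""
    op = phi[0]
    if op == "atom":
        return None if phi[1:] in atoms else ("F-atom", phi)
    if op == "not":
        tm = verify(phi[1], dom, atoms)
        return ("F-not", tm) if tm else None
    if op == "and":                # a counter-witness conjunct suffices
        for sub in (phi[1], phi[2]):
            fm = falsify(sub, dom, atoms)
            if fm:
                return ("F-and", fm)
        return None
    if op == "or":
        l, r = falsify(phi[1], dom, atoms), falsify(phi[2], dom, atoms)
        return ("F-or", l, r) if l and r else None
    if op == "all":                # a counter-instance falsifies the universal
        for c in dom:
            fm = falsify(subst(phi[2], phi[1], c), dom, atoms)
            if fm:
                return ("F-all", c, fm)
        return None
    if op == "some":               # falsify every instance (furcation again)
        branches = [falsify(subst(phi[2], phi[1], c), dom, atoms) for c in dom]
        return ("F-some", tuple(branches)) if all(branches) else None
```

On the model with domain {a, b} and true atoms {P(a)}, `verify` returns a truth-maker for ∃x P(x) but not for ∀x P(x), for which `falsify` returns a falsity-maker citing the counter-instance b, mirroring the duality of the verification and falsification rules.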


Author(s): George J. Andreopoulos, Rosemary L. Barberet, Mahesh K. Nalla

2017, Vol 23 (1), pp. 83-104
Author(s): V. O. Shangin

In this paper, we reconsider the precise definition of a natural deduction inference given by V. Smirnov. In refining the definition, we argue that all the other indirect rules of inference in a system can be considered as special cases of the implication introduction rule, in the sense that if one of those rules can be applied then the implication introduction rule can be applied as well, but not vice versa. As an example, we use the logics $I_{\langle\alpha, \beta\rangle}$, $\alpha, \beta \in \{0, 1, 2, 3, \dots, \omega\}$, presented by V. Popov, where $I_{\langle 0, 0\rangle}$ is classical propositional logic. Popov uses these logics, in particular a Hilbert-style calculus $HI_{\langle\alpha, \beta\rangle}$ for each logic in question, to construct examples of effects of the generalization of Glivenko’s theorem. Here we first propose a subordinated natural deduction system $NI_{\langle\alpha, \beta\rangle}$ for each logic in question, with a precise definition of an $NI_{\langle\alpha, \beta\rangle}$-inference, and compare the precise and traditional definitions. Second, we prove that, for each $\alpha, \beta \in \{0, 1, 2, 3, \dots, \omega\}$, the Hilbert-style calculus $HI_{\langle\alpha, \beta\rangle}$ and the natural deduction system $NI_{\langle\alpha, \beta\rangle}$ are equipollent, that is, a formula $A$ is provable in $HI_{\langle\alpha, \beta\rangle}$ iff $A$ is provable in $NI_{\langle\alpha, \beta\rangle}$. DOI: 10.21146/2074-1472-2017-23-1-83-104
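The observation that indirect rules are special cases of implication introduction can be illustrated with negation introduction (a sketch under the usual reading of assumption discharge, not Smirnov’s or Shangin’s precise definition):

```latex
% Both rules discharge the assumption [A] on the strength of a closed
% subderivation. Wherever negation introduction applies (the
% subderivation ends in absurdity), implication introduction applies
% as well, yielding A -> absurdity; the converse fails, since
% implication introduction places no constraint on the conclusion
% of the subderivation.
\[
\frac{\begin{array}{c}[A]\\ \vdots\\ B\end{array}}{A \to B}\;(\to\text{I})
\qquad
\frac{\begin{array}{c}[A]\\ \vdots\\ \bot\end{array}}{\neg A}\;(\neg\text{I})
\]
```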


2009, Vol 36, pp. 165-228
Author(s): B. Motik, R. Shearer, I. Horrocks

We present a novel reasoning calculus for the description logic SHOIQ^+---a knowledge representation formalism with applications in areas such as the Semantic Web. Unnecessary nondeterminism and the construction of large models are two primary sources of inefficiency in the tableau-based reasoning calculi used in state-of-the-art reasoners. In order to reduce nondeterminism, we base our calculus on hypertableau and hyperresolution calculi, which we extend with a blocking condition to ensure termination. In order to reduce the size of the constructed models, we introduce anywhere pairwise blocking. We also present an improved nominal introduction rule that ensures termination in the presence of nominals, inverse roles, and number restrictions---a combination of DL constructs that has proven notoriously difficult to handle. Our implementation shows significant performance improvements over state-of-the-art reasoners on several well-known ontologies.
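The blocking idea the abstract builds on can be sketched as follows: in anywhere blocking, a node may be blocked by any previously created node with a suitable label, not just an ancestor, which caps the size of the constructed model. This is a simplified equality-blocking check for illustration; it is not HermiT’s exact pairwise condition, which also compares the labels of the edges connecting each node to its predecessor.

```python
def blocked_by(labels, order, node):
    """Return the earliest node with the same concept label as `node`, or None.

    labels: dict mapping node -> frozenset of concept names in its label
    order:  list of nodes in creation order
    node:   the node to test for being blocked
    """
    for earlier in order:
        if earlier == node:
            return None            # only strictly earlier nodes can block
        if labels[earlier] == labels[node]:
            return earlier         # `node` reuses `earlier`'s structure
    return None
```

A blocked node is not expanded further, so repeated labels stop the tableau from unfolding an infinite model.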


1988, Vol 53 (3), pp. 673-695
Author(s): Sidney C. Bailin

In this paper we present a normalization theorem for a natural deduction formulation of Zermelo set theory. Our result gets around M. Crabbé’s counterexample to normalizability (Hallnäs [3]) by adding an inference rule of the form (1) and requiring that this rule be used wherever it is applicable. Alternatively, we can regard the result as pertaining to a modified notion of normalization, in which an inference is never considered reducible if A is T ∈ T, even if R is an elimination rule and the major premise of R is the conclusion of an introduction rule. A third alternative is to regard (1) as a derived rule: using the general well-foundedness rule (2) we can derive (1). If we regard (2) as neutral with respect to the normality of derivations (i.e., (2) counts as neither an introduction nor an elimination rule), then the resulting proofs are normal.

