Inferential equivalence and natural deduction

1957 ◽  
Vol 22 (3) ◽  
pp. 237-240
Author(s):  
H. Hiż

There are axiomatizations of the sentential calculus which, in effect, assert the inferential equivalence (mutual deducibility) of classes of sentences. A well known axiomatization of this sort consists of A1–A3. Another one, closely connected with A1–A3, comprises B1–B3. B1–B3 jointly assert the inferential equivalence of a sentence of the form (p ⊃ q) ⊃ r and two sentences of the forms q ⊃ r and ∼p ⊃ r respectively. This axiomatization requires the rule of substitution and the rule of detachment for conditionals.

The existence of such axiomatizations suggests a possibility of formulating natural deduction by means of inferential equivalence instead of the usual one-sided inference, customarily represented by ‘⊦’. ‘H’ will stand here for inferential equivalence; it may be looked upon as a combination of the signs ‘⊦’ and ‘⊣’. ‘H’ is thought of as a metasystematic constant functor of two arguments, each of which is a class of names of sentences. For the purposes of this paper the first argument may be just a single name of a sentence, the second a pair (not necessarily ordered) of names of sentences. The role ‘H’ plays is determined by *1–*5 below. Beyond that, ‘H’ remains here uninterpreted; in particular, it is not directly postulated that it is an identity relation.
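The mutual deducibility asserted by B1–B3 has a simple semantic counterpart in the classical sentential calculus: under every valuation, (p ⊃ q) ⊃ r holds exactly when both q ⊃ r and ∼p ⊃ r hold. A minimal truth-table check (an illustration of the claim, not the paper's metasystematic apparatus):

```python
from itertools import product

def imp(a, b):
    # material conditional a ⊃ b
    return (not a) or b

# Over all eight valuations, (p ⊃ q) ⊃ r agrees with the
# conjunction of q ⊃ r and ∼p ⊃ r.
for p, q, r in product([False, True], repeat=3):
    lhs = imp(imp(p, q), r)
    rhs = imp(q, r) and imp(not p, r)
    assert lhs == rhs
```

By completeness, this semantic equivalence is what makes the two-way deducibility between the single sentence and the pair of sentences available to an axiomatization.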

1984 ◽  
Vol 49 (2) ◽  
pp. 329-333 ◽  
Author(s):  
Branislav R. Boričić

This note is written in reply to López-Escobar's paper [L-E], where a “sequence” of intermediate propositional systems NLCn (n ≥ 1) and corresponding implicative propositional systems NLICn (n ≥ 1) is given. We will show that the “sequence” NLCn contains only three different systems: the classical propositional calculus NLC1, Dummett's system NLC2, and the system NLC3. Accordingly (see [C], [Hs2], [Hs3], [B1], [B2], [Hs4], [L-E]), the problem posed in [L-E] can be formulated as follows: is NLC3 a conservative extension of NLIC3? Having in mind investigations of intermediate propositional calculi that give more general results of this type (see V. I. Homič [H1], [H2], C. G. McKay [Mc], T. Hosoi [Hs1]), in this note, using a result of Homič (Theorem 2 of [H1]), we give a positive solution to this problem.

NLICn and NLCn. If X and Y are propositional logical systems, by X ⊆ Y we mean that the set of all provable formulas of X is included in that of Y, and X = Y means that X ⊆ Y and Y ⊆ X. A(P1/B1, …, Pn/Bn) is the formula (or sequent) obtained from the formula (or sequent) A by substituting B1, …, Bn simultaneously for the distinct propositional variables P1, …, Pn in A. Let Cn (n ≥ 1) be the string of the following sequents. Having in mind that calculi of sequents can be understood as meta-calculi for the deducibility relation in the corresponding systems of natural deduction (see [P]), the systems of natural deduction NLCn and NLICn (n ≥ 1) introduced in [L-E] can be identified with the calculi of sequents obtained by adding the sequents Cn as axioms to a sequential formulation of the Heyting propositional calculus and to a system of positive implication, respectively (see [C], [Ch], [K], [P]).
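The systems in question are intermediate logics between Heyting's calculus and the classical one. As a small semantic illustration, outside the note's sequent apparatus (the chain semantics below is brought in only for illustration): Dummett's system is complete for linearly ordered Heyting algebras, and on any finite Gödel chain the linearity axiom (p → q) ∨ (q → p) is valid while classical excluded middle already fails on a three-element chain.

```python
def g_imp(a, b, top):
    # Goedel implication on the chain 0 < 1 < ... < top
    return top if a <= b else b

def g_neg(a, top):
    return g_imp(a, 0, top)

top = 2                       # the three-element chain 0 < 1 < 2
vals = range(top + 1)

# Dummett's linearity axiom is valid on the chain ...
assert all(max(g_imp(p, q, top), g_imp(q, p, top)) == top
           for p in vals for q in vals)

# ... while p v ~p fails at the intermediate value p = 1,
# so the system lies strictly between Heyting's and the classical calculus.
assert any(max(p, g_neg(p, top)) != top for p in vals)
```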


1977 ◽  
Vol 42 (1) ◽  
pp. 11-28 ◽  
Author(s):  
Jonathan P. Seldin

The sequent calculus formulation (L-formulation) of the theory of functionality without the rules allowing for conversion of subjects of [3, §14E6] fails because the (cut) elimination theorem (ET) fails. This can be seen most easily from the fact that two of the relevant sequents are easy to prove in the system while the third is not (as is obvious if α is an atomic type [an F-simple]). The error in the “proof” of ET in [14, §3E6], [3, §14E6], and [7, §9C] occurs in Stage 3, where it is implicitly assumed that if [x]X ≡ [x]Y then X ≡ Y. In the above counterexample, we have [x]x ≡ I ≡ [x](Ix) but x ≢ Ix. Since the formulation of [2, §9F] is not really satisfactory (for reasons stated in [3, §14E]), a new sequent calculus formulation is needed for the case in which the rules for subject conversion are not present. The main part of this paper is devoted to presenting such a formulation and proving it equivalent to the natural deduction formulation (T-formulation). The paper concludes in §6 with some remarks on the result that every ob (term) with a type (functional character) has a normal form.

The conventions and definitions of [3], especially of §12D and Chapter 14, will be used throughout the paper.
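The counterexample turns on bracket abstraction not being injective. A minimal sketch (not the formulation of [3]; the term representation and clause order here are assumptions): with the usual I, K, S clauses plus the eta clause [x](M x) = M for x not free in M, the distinct terms x and (I x) abstract to the same combinator.

```python
def free_in(x, t):
    # t is a variable (str) or an application (2-tuple)
    if isinstance(t, str):
        return t == x
    return free_in(x, t[0]) or free_in(x, t[1])

def abstract(x, t):
    # bracket abstraction [x]t
    if t == x:
        return "I"                                   # [x]x = I
    if isinstance(t, tuple) and t[1] == x and not free_in(x, t[0]):
        return t[0]                                  # eta: [x](M x) = M
    if not free_in(x, t):
        return ("K", t)                              # [x]M = K M
    return (("S", abstract(x, t[0])), abstract(x, t[1]))

# [x]x and [x](I x) both abstract to I, yet x and (I x) are distinct:
assert abstract("x", "x") == "I"
assert abstract("x", ("I", "x")) == "I"
assert "x" != ("I", "x")
```

This is exactly why [x]X ≡ [x]Y does not warrant X ≡ Y in Stage 3 of the “proof”.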


1949 ◽  
Vol 14 (3) ◽  
pp. 167-172 ◽  
Author(s):  
Nathan P. Levin

We make certain notational conventions, referred to as CL (Computational Logic). The relation of interchangeability is introduced as the basic connection between logical formulas. This approach lends perspicuity to the results of the sentential calculus; with its technical devices CL is able to rephrase logical theorems in rather succinct manner. Our exposition tries to steer a middle course between informality and strict rigor.

It is felt that the method of CL offers advantages for the teaching of logic. Proofs are algorithmic and resemble those of elementary algebra. All inferences are reversible and practically non-tentative.

Sign is a primitive term of CL. Its defining property is a capacity for entering into binary combination with other signs, or with itself, according to this convention: x and y are signs if, and only if, (x y) is a sign. Bracketing is looked upon as an operation on signs in terms of which other operations are definable. The practice of some authors of classifying brackets under the heading of symbols seems to us questionable; for, unlike symbols or signs, brackets are never used to denote anything. Brackets enter into the composition of signs not to denote a grouping, but rather to exhibit it, in the manner of a diagram.

An unending list of letters, with or without subscripts, serves to denote arbitrary signs. An arbitrary sign may or may not have other signs as parts. The numeral “2” is used as a constant. Bracketing abbreviation is as follows: xy abbreviates (x y), xyz abbreviates ((x y) z), and so on. Outermost brackets will ordinarily be omitted. This kind of bracketing may be termed left-associative. For convenience, these bracketing conventions are crystallized into a rule.
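The left-associative convention can be sketched mechanically: an unbracketed string of signs abbreviates the binary combination folded to the left. A small illustration (the tuple representation of a combined sign is an assumption made here for concreteness):

```python
from functools import reduce

def combine(signs):
    # Fold the binary combination to the left, so that
    # x y z abbreviates ((x y) z), w x y z abbreviates (((w x) y) z), etc.
    return reduce(lambda a, b: (a, b), signs)

assert combine(["x", "y", "z"]) == (("x", "y"), "z")
assert combine(["w", "x", "y", "z"]) == ((("w", "x"), "y"), "z")
```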


1988 ◽  
Vol 53 (3) ◽  
pp. 673-695 ◽  
Author(s):  
Sidney C. Bailin

In this paper we present a normalization theorem for a natural deduction formulation of Zermelo set theory. Our result gets around M. Crabbé's counterexample to normalizability (Hallnäs [3]) by adding an inference rule of the form (1) and requiring that this rule be used wherever it is applicable. Alternatively, we can regard the result as pertaining to a modified notion of normalization, in which an inference is never considered reducible if A is T ∈ T, even if R is an elimination rule and the major premise of R is the conclusion of an introduction rule. A third alternative is to regard (1) as a derived rule: using the general well-foundedness rule (2), we can derive (1). If we regard (2) as neutral with respect to the normality of derivations (i.e., (2) counts as neither an introduction nor an elimination rule), then the resulting proofs are normal.


1956 ◽  
Vol 21 (1) ◽  
pp. 52-55 ◽  
Author(s):  
Irving M. Copi

Since 1934 various techniques for natural deduction have been developed by Gentzen, Jaśkowski, Rosser, Quine, and others (see [1], pp. 147–167; [2], especially footnotes 1, 3, and 4; and [3], pp. 75–83, 96–107, and 289–294). It has been pointed out to me by Professor Donald Kalish of U.C.L.A. that the restrictions placed upon Universal Generalization (UG) and Existential Instantiation (EI) in [3] force one to construct a less natural proof than seems desirable for such arguments (cf. [3], p. 139, and [1], pp. 175f.). It is my purpose in this note to formulate an alternative restriction on UG which will permit a more natural proof for such arguments, and to prove the consistency of the altered rule.

In the notation of [3], “The expression ‘Φμ’ will denote any propositional function in which there is at least one free occurrence of the variable denoted by ‘μ’. The expression ‘Φν’ will denote the result of replacing all free occurrences of μ in Φμ by ν, with the added proviso that when ν is a variable it must occur free in Φν at all places at which μ occurs free in Φμ.” ([3], p. 100.) The statement of EI is relatively unrestricted, being qualified only by the condition “provided that ν is a variable which occurs free in no earlier step.” ([3], p. 104.)


Author(s):  
Carolyn Nohr ◽  
Ann Ayres

Texts on electron diffraction recommend that the camera constant of the electron microscope be determined by calibration with a standard crystalline specimen, using the equation
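The calibration relation in question is commonly written R·d = λL, where R is the measured ring (or spot) radius, d the lattice spacing of the standard, and λL the camera constant. A hedged sketch of the calibration step and its use on an unknown specimen (the gold standard values below are illustrative, not taken from the text):

```python
def camera_constant(ring_radius_mm, d_spacing_nm):
    # lambda*L in mm*nm, from one measured ring of a known standard
    return ring_radius_mm * d_spacing_nm

def d_spacing(ring_radius_mm, cam_const):
    # invert the camera equation for an unknown specimen
    return cam_const / ring_radius_mm

# e.g. calibrating on Au(111), d = 0.2355 nm, measured ring radius 10.0 mm:
K = camera_constant(10.0, 0.2355)          # camera constant, 2.355 mm*nm
assert abs(d_spacing(10.0, K) - 0.2355) < 1e-12
```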


Author(s):  
Kin Lam

The energy of ions moving in a solid depends on the electronic density as well as on the atomic structural properties of the target material. These factors contribute to the observable effects in polycrystalline material using the scanning ion microscope. Here we outline a method to investigate the dependence of low-velocity proton stopping on interatomic distances and orientations.

The interaction of charged particles with atoms in the framework of the Fermi gas model was proposed by Lindhard. For a system of atoms, the electronic Lindhard stopping power can be generalized to the form, where the stopping power function is defined as


Author(s):  
A. Kosiara ◽  
J. W. Wiggins ◽  
M. Beer

A magnetic spectrometer to be attached to the Johns Hopkins STEM is under construction. Its main purpose will be to investigate electron interactions with biological molecules in the energy range of 40 keV to 100 keV. The spectrometer is of the type described by Kerwin and by Crewe. Its magnetic pole boundary is given by the equation, where R is the electron curvature radius; in our case, R = 15 cm. The electron beam will be deflected by an angle of 90°. The distance between the electron source and the pole boundary will be 30 cm. A linear fringe field will be generated by a quadrupole field arrangement, accomplished by a grounded mirror plate and a 45° taper of the magnetic pole.


Author(s):  
N. J. Zaluzec

The ultimate sensitivity of microchemical analysis using x-ray emission rests on selecting those experimental conditions which will maximize the measured peak-to-background (P/B) ratio. This paper presents the results of calculations aimed at determining the influence of incident beam energy, detector/specimen geometry and specimen composition on the P/B ratio for ideally thin samples (i.e., samples in which the effects of scattering and absorption are negligible). It is thus assumed that the complications resulting from system peaks, bremsstrahlung fluorescence, electron tails and specimen contamination have been eliminated, and that one need only consider the physics of the generation/emission process.

The number of characteristic x-ray photons (Ip) emitted from a thin foil of thickness dt into the solid angle dΩ is given by the well-known equation
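A hedged sketch of the standard thin-film generation expression of this kind: ignoring scattering and absorption, the characteristic photons emitted into a solid angle dΩ scale as Ip ∝ n·Q·ω·a·(dΩ/4π)·Ne·t, with n the atomic density, Q the ionization cross-section, ω the fluorescence yield, a the relative line weight, Ne the number of incident electrons and t the foil thickness. All numerical values below are illustrative assumptions, not results from the paper.

```python
import math

E_CHARGE = 1.602e-19  # elementary charge, C

def emitted_photons(n, Q, omega, a, d_omega, beam_current, thickness, time):
    # n [atoms/m^3], Q [m^2], d_omega [sr], beam_current [A],
    # thickness [m], time [s]; scattering and absorption neglected.
    electrons = beam_current * time / E_CHARGE
    return n * Q * omega * a * (d_omega / (4 * math.pi)) * electrons * thickness

# In the thin-film limit the count is linear in thickness:
i1 = emitted_photons(6e28, 1e-24, 0.3, 0.9, 0.1, 1e-9, 50e-9, 1.0)
i2 = emitted_photons(6e28, 1e-24, 0.3, 0.9, 0.1, 1e-9, 100e-9, 1.0)
assert abs(i2 / i1 - 2.0) < 1e-9
```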


Author(s):  
G. Cliff ◽  
M.J. Nasir ◽  
G.W. Lorimer ◽  
N. Ridley

In a specimen which is transmission-thin to 100 kV electrons - a sample in which X-ray absorption is so insignificant that it can be neglected and where fluorescence effects can generally be ignored (1,2) - a ratio of characteristic X-ray intensities, I1/I2, can be converted into a weight-fraction ratio, C1/C2, using the equation, where k12 is, at a given voltage, a constant independent of composition or thickness. k12 values can be determined experimentally from thin standards (3) or calculated (4,6). Both experimental and calculated k12 values have been obtained for Kα (11 < Z < 19), Kα (Z > 19) and some Lα radiations (3,6) at 100 kV. The object of the present series of experiments was to determine k12 values experimentally at voltages between 200 and 1000 kV and to compare these with calculated values.

The experiments were carried out on an AEI-EM7 HVEM fitted with an energy-dispersive X-ray detector.
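The ratio relation described above, C1/C2 = k12·(I1/I2), turns a measured intensity ratio directly into a weight-fraction ratio. A minimal sketch for a binary specimen, where C1 + C2 = 1 fixes both fractions (the k12 value and counts below are illustrative, not measured values from the experiments):

```python
def weight_fraction_ratio(i1, i2, k12):
    # C1/C2 from the measured intensity ratio I1/I2
    return k12 * (i1 / i2)

def weight_fractions(i1, i2, k12):
    # binary specimen: C1 + C2 = 1 determines both weight fractions
    r = weight_fraction_ratio(i1, i2, k12)
    c1 = r / (1.0 + r)
    return c1, 1.0 - c1

c1, c2 = weight_fractions(1500.0, 1000.0, 1.1)
assert abs(c1 + c2 - 1.0) < 1e-12
assert abs(c1 / c2 - 1.1 * 1.5) < 1e-12
```

Since k12 is independent of composition and thickness at a given voltage, a single set of thin standards calibrates the whole composition range at that voltage.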

