Toward a calculus of concepts

1936 ◽  
Vol 1 (1) ◽  
pp. 2-25 ◽  
Author(s):  
W. V. Quine

By concepts will be meant propositions (or truth-values), attributes (or classes), and relations of all degrees. The degree of a concept will be said to be 0, 1, or n (> 1), and the concept will be said to be medadic, monadic, or n-adic, according as the concept is a proposition, an attribute, or an n-adic relation. The common procedure in systematizing logistic is to treat these successive degrees as ultimately separate categories. The partition is not rested upon properties of the thus classified elements within the formal system, but is imposed rather at the metamathematical level, through stipulations as to what combinations of signs are to be accorded or denied meaning. Each function of the formal system is restricted, thus metamathematically, to one degree for its values and to one for each of its arguments. The theory of types imports a further scheme of infinite partition, imposed by metamathematical stipulations as to the relative types of admissible arguments of the several functions and stipulations as to the types of the values of the functions relative to the types of the arguments.

The elaborateness of the metamathematical grillwork which thus underlies formal logistic accounts in part for the tendency of those interested in logistic less for the matters treated than for the structures exemplified to limit their attention to the propositional calculus and the Boolean calculus of attributes (or classes), which, taken separately, are independent of the partitioning. A second reason for the algebraic appeal of these departments is their freedom from bound (apparent) variables: for use of bound variables fuses systematic considerations with notational or metamathematical ones in a way which resists ordinary formulation in terms of fixed functions and their arguments. Freedom from bound variables may be regarded, indeed, as the feature distinguishing algebra from analysis.
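To fix the opening definitions, a toy illustration (mine, not anything in Quine's paper): the degree of a concept is simply its arity.

```python
# Toy illustration of the degree classification (my naming, not Quine's):
# degree 0 = medadic (proposition), 1 = monadic (attribute),
# n > 1 = n-adic (relation).

def degree_label(n: int) -> str:
    """Traditional name for a concept of degree n."""
    if n == 0:
        return "medadic (proposition)"
    if n == 1:
        return "monadic (attribute)"
    return f"{n}-adic (relation)"

# A truth-value, a class, and a binary relation, by degree:
for n in (0, 1, 2):
    print(n, "->", degree_label(n))
```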

The flux integral for axisymmetric polar perturbations of static vacuum space-times, derived in an earlier paper directly from the relevant linearized Einstein equations, is rederived with the aid of the Einstein pseudo-tensor by a simple algorism. A similar earlier effort with the aid of the Landau–Lifshitz pseudo-tensor failed. The success with the Einstein pseudo-tensor is due to its special distinguishing feature that its second variation retains its divergence-free property provided only the equations governing the static space-time and its linear perturbations are satisfied. When one seeks the corresponding flux integral for Einstein–Maxwell space-times, the common procedure of including, together with the pseudo-tensor, the energy–momentum tensor of the prevailing electromagnetic field fails. But a prescription due to R. Sorkin, of including instead a suitably defined ‘Noether operator’, succeeds.


Minerals ◽  
2020 ◽  
Vol 10 (12) ◽  
pp. 1083
Author(s):  
Maxim Oshchepkov ◽  
Konstantin Popov ◽  
Anna Kovalenko ◽  
Anatoly Redchuk ◽  
Julia Dikareva ◽  
...  

The primary nucleation mechanism of gypsum in a bulk aqueous medium was identified as heterogeneous for 0.05 and 0.03 mol·L⁻¹ CaSO₄·2H₂O solutions at 25 °C. By means of a particle counter and the dynamic light scattering (DLS) technique, solid nano/microimpurities were detected and controlled in the stock brines used to prepare supersaturated gypsum solutions. It is demonstrated that the common procedure of filtering reagent-grade 0.10 mol·L⁻¹ CaCl₂ and Na₂SO₄ aqueous solutions through 200 nm membranes is capable of reducing the content of foreign solid microimpurities (size > 100 nm) from 10⁶ to 10³ units per mL, but fails to affect the more numerous nanofraction (size < 100 nm). Thus, gypsum nucleation takes place in the presence of a significant amount of “nano/microdust” templates and has a heterogeneous character. The induction time, measured by conductivity at similar supersaturation levels, reveals a well-detectable dependence on nano/microdust content: an increasing background particle concentration substantially decreases the induction period at a constant saturation state and temperature, and thus increases the nucleation rate. Therefore, the gypsum nucleation reaction is tentatively held to start with the fast heterogeneous formation of well-defined primary nuclei via sorption of [Ca²⁺], [SO₄²⁻], and [CaSO₄]⁰ species on the surface of the “nano/microdust” particles. Thus, the “nano/microdust” naturally occurring in any high-purity chemical plays a key role in the nucleation of sparingly soluble salts in the bulk aqueous medium.
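To make the reported trend concrete, the sketch below uses the classical nucleation theory form for heterogeneous nucleation, with the rate proportional to the number of template particles and the induction time taken as its reciprocal; the constants and functional form are illustrative assumptions of mine, not values or a model from the paper.

```python
import math

# Illustrative sketch (assumed CNT form, not the paper's model):
# heterogeneous nucleation rate J = A * N * exp(-B / ln(S)**2),
# where N is the template-particle concentration and S > 1 the
# saturation ratio. Induction time is taken as t_ind ~ 1/J.

A = 1e-6   # kinetic prefactor per particle (arbitrary illustrative value)
B = 10.0   # effective thermodynamic barrier parameter (illustrative)

def induction_time(n_particles_per_ml: float, saturation_ratio: float) -> float:
    """Induction time ~ 1/J for heterogeneous nucleation on N templates."""
    rate = A * n_particles_per_ml * math.exp(-B / math.log(saturation_ratio) ** 2)
    return 1.0 / rate

# At fixed supersaturation, more background particles -> shorter induction time.
for n in (1e3, 1e6):  # roughly the filtered vs. unfiltered microimpurity counts
    print(f"N = {n:.0e} /mL  ->  t_ind ~ {induction_time(n, 2.0):.3g} (arb. units)")
```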


2013 ◽  
Vol 427-429 ◽  
pp. 1917-1923
Author(s):  
Hong Lan Liu ◽  
De Zheng Zhang

The well-formed formulas (wffs) in the classical formal system of propositional calculus (CPC) are only formal symbols, whose meanings are given by an interpretation. A probabilistic logic system, based on a probability space, is an event semantics for CPC, in which set operations are the semantic interpretations of the connectives, event functions are the semantic interpretations of wffs, event (set) inclusion ⊆ is the semantic interpretation of tautological implication, and event equality = is the semantic interpretation of tautological equivalence. CPC is thus fully applicable to probabilistic propositions. Event calculus, rather than truth-value (probability) calculus, must be performed in CPC, because there are no truth-value functions (operators) that interpret all the connectives correctly.
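A minimal sketch of the idea (my construction, not the authors' formalism): take a finite sample space, interpret each wff as the set of outcomes on which it holds, and read conjunction, disjunction, and negation as intersection, union, and complement; tautological implication then becomes set inclusion.

```python
from fractions import Fraction

# Finite sample space: two fair coin tosses (illustrative choice).
omega = frozenset({"HH", "HT", "TH", "TT"})

# Interpret atomic propositions as events (subsets of omega).
first_heads = frozenset({"HH", "HT"})
second_heads = frozenset({"HH", "TH"})

# Connectives as set operations.
def conj(a, b): return a & b          # conjunction  = intersection
def disj(a, b): return a | b          # disjunction  = union
def neg(a):     return omega - a      # negation     = complement
def implies(a, b): return neg(a) | b  # material conditional

def prob(event):
    """Uniform probability measure on omega."""
    return Fraction(len(event), len(omega))

# Tautological implication is event inclusion:
assert conj(first_heads, second_heads) <= disj(first_heads, second_heads)

# A tautology denotes the whole space (probability 1):
assert disj(first_heads, neg(first_heads)) == omega
print(prob(conj(first_heads, second_heads)))  # 1/4
```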


1994 ◽  
Vol 59 (3) ◽  
pp. 830-837 ◽  
Author(s):  
Mingsheng Ying

Classical logic is not adequate to face the essential vagueness of human reasoning, which is approximate rather than precise in nature. The logical treatment of the concepts of vagueness and approximation is of increasing importance in artificial intelligence and related research. Consequently, many logicians have proposed different systems of many-valued logic as a formalization of approximate reasoning (see, for example, Goguen [G], Gerla and Tortora [GT], Novak [No], Pavelka [P], and Takeuti and Titani [TT]). As far as we know, all these proposals are obtained by extending the range of truth values of propositions. In these logical systems reasoning is still exact, and to draw a conclusion the antecedent clause of a rule must match its premise exactly. In addition, Wang [W] pointed out: “If we compare calculation with proving,... Procedures of calculation... can be made so by fairly well-developed methods of approximation; whereas... we do not have a clear conception of approximate methods in theorem proving.... The concept of approximate proofs, though undeniably of another kind than approximations in numerical calculations, is not incapable of more exact formulation in terms of, say, sketches of and gradual improvements toward a correct proof” (see pp. 224–225). As far as the author is aware, however, no attempts have been made to give a conception of approximate methods in theorem proving.

The purpose of this paper is, unlike all the previous proposals, to develop a propositional calculus and a predicate calculus in which the truth values of propositions are still exactly true or false, and in which the reasoning may be approximate, allowing the antecedent clause of a rule to match its premise only approximately. In a forthcoming paper we shall establish a set theory based on the logic introduced here, in which there are ∣L∣ binary predicates ∈λ, λ ∈ L, such that R(∈, ∈λ) = λ, where ∈ stands for ∈1 and 1 is the greatest element of L, and x ∈λ y is interpreted as meaning that x belongs to y to the degree λ; we shall relate it to the intuitionistic fuzzy set theory of Takeuti and Titani [TT] and the intuitionistic modal set theory of Lano [L]. In another forthcoming paper we shall introduce the resolution principle under approximate match and illustrate its applications in production systems of artificial intelligence.
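As a rough illustration of the intended contrast (my sketch, not Ying's calculus): classical modus ponens requires an exact match between premise and antecedent, while an approximate variant fires whenever a lattice-valued match degree reaches a threshold, attaching that degree to the conclusion.

```python
# Illustrative sketch of approximate rule matching (not Ying's formal system).
# Truth values stay two-valued; only the *match* between a premise and a
# rule's antecedent carries a degree, here a float in the lattice [0, 1].

def match_degree(premise: frozenset, antecedent: frozenset) -> float:
    """Degree to which a premise matches an antecedent (Jaccard overlap,
    an assumed similarity; Ying uses a lattice-valued R instead)."""
    if not premise and not antecedent:
        return 1.0
    return len(premise & antecedent) / len(premise | antecedent)

def approx_modus_ponens(premise, antecedent, consequent, threshold=0.8):
    """Fire the rule if the premise matches the antecedent to at least
    `threshold`; return the conclusion tagged with its match degree."""
    d = match_degree(premise, antecedent)
    return (consequent, d) if d >= threshold else None

# The premise shares most, but not all, features of the antecedent:
premise    = frozenset({"has_fever", "coughs", "fatigued"})
antecedent = frozenset({"has_fever", "coughs", "fatigued", "aches"})
print(approx_modus_ponens(premise, antecedent, "flu", threshold=0.7))
# ('flu', 0.75): the rule fires with match degree 3/4
```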


2005 ◽  
Vol 70 (1) ◽  
pp. 282-318
Author(s):  
Lars Hansen

The purpose of this paper is to present an algebraic generalization of the traditional two-valued logic. This involves introducing a theory of automorphism algebras, which is an algebraic theory of many-valued logic having a complete lattice as the set of truth values. Two generalizations of the two-valued case will be considered, viz., the finite chain and the Boolean lattice. In the case of the Boolean lattice, on choosing a designated lattice value, this algebra has binary retracts for which the usual axiomatic theory of the propositional calculus is a suitable theory. This suitability applies to the Boolean algebra of formalized token models [2], where the truth values are, for example, vocabularies. Finally, as the actual motivation for this paper, we indicate how the theory of formalized token models [2] is an example of a many-valued predicate calculus.
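For a flavor of lattice-valued truth (a generic sketch, not Hansen's automorphism algebras): on a finite chain 0 < 1/2 < 1, conjunction and disjunction can be read as lattice meet and join, with 1 as the designated value.

```python
# Generic sketch of many-valued logic on a finite chain (illustrative only;
# Hansen's construction uses automorphism algebras, not bare meet/join).

CHAIN = (0.0, 0.5, 1.0)   # finite chain of truth values, 1.0 designated
DESIGNATED = 1.0

def meet(x, y): return min(x, y)   # lattice meet as conjunction
def join(x, y): return max(x, y)   # lattice join as disjunction

def residuum(x, y):
    """Relative pseudo-complement on a chain: the largest z with
    meet(x, z) <= y (the standard Goedel implication)."""
    return 1.0 if x <= y else y

# On the two-element subchain {0.0, 1.0} we recover classical implication:
for x in (0.0, 1.0):
    for y in (0.0, 1.0):
        assert residuum(x, y) == (1.0 if (not x or y) else 0.0)
print(residuum(1.0, 0.5))  # 0.5: implication degrades gracefully mid-chain
```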


2018 ◽  
Vol 12 (S 01) ◽  
pp. S45-S49
Author(s):  
Shawn D. Feinstein ◽  
Reid W. Draeger

We report the case of a pediatric patient who underwent intra-arterial exploration and removal of a foreign body after an arterial catheter cannula inadvertently fractured during removal and a fragment remained within the radial artery. The fragment was visualized intraoperatively using fluoroscopy and was successfully removed from the common digital artery to the index finger, to which it had migrated. We present the case as a rare complication of an exceedingly common procedure, with a timely response to avoid further complication.


1957 ◽  
Vol 38 (2) ◽  
pp. 67-73 ◽  
Author(s):  
Seymour L. Hess

The determination of the Laplacian of a mapped quantity, such as the geostrophic vorticity, is an important but time-consuming process. The common procedure involves a number of interpolations and an algebraic computation for each point at which the evaluation is made. A device is presented in which the slow process of interpolation is replaced by the faster and more reliable measurement of the distance between isopleths of the mapped quantity, and the algebraic calculation is replaced by the rapid, automatic operation of a DC analog computer. The numerical result is presented as the deflection of a millivoltmeter. In practice the use of this computer proves to be a rapid, reproducible, and sufficiently accurate means of determining the geostrophic vorticity from an analysis of the contours of an isobaric surface.
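The point evaluation being automated is, in modern terms, a finite-difference Laplacian. A minimal sketch follows; the geostrophic relation ζ_g = (g/f)∇²z is the standard one, and the grid values and spacing are illustrative, not details from the paper.

```python
import math

# Five-point finite-difference Laplacian of geopotential height z, and the
# geostrophic vorticity zeta_g = (g/f) * laplacian(z). Standard relations;
# the height field below is illustrative, not data from the paper.

G = 9.81                                          # gravity, m s^-2
F = 2 * 7.292e-5 * math.sin(math.radians(45.0))   # Coriolis parameter at 45 N

def laplacian_5pt(z, i, j, d):
    """Five-point stencil: (z_N + z_S + z_E + z_W - 4 z_0) / d^2."""
    return (z[i + 1][j] + z[i - 1][j] + z[i][j + 1] + z[i][j - 1]
            - 4.0 * z[i][j]) / d**2

def geostrophic_vorticity(z, i, j, d):
    return (G / F) * laplacian_5pt(z, i, j, d)

# Illustrative 3x3 patch of 500 hPa heights (meters), 100 km grid spacing.
z = [[5700.0, 5680.0, 5700.0],
     [5680.0, 5670.0, 5680.0],
     [5700.0, 5680.0, 5700.0]]
print(f"zeta_g = {geostrophic_vorticity(z, 1, 1, 100e3):.2e} s^-1")
```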


Author(s):  
Steven T. Kuhn

A simple puzzle leads Fine to conclude that we should distinguish between worldly sentences like “Socrates exists,” whose truth values depend on circumstances, and unworldly ones like “Socrates is human,” which are true or false independently of circumstances. The former, if true in every circumstance, express necessary propositions. The latter, if true, express transcendental propositions, which, for theoretical convenience, we regard as necessary in an extended sense. Here it is argued that this understanding is backwards. Transcendental truths and sentences true in every circumstance (here labeled universal truths) are both species of necessary truth. The revised understanding is clarified by a simple formal system with distinct operators for necessary, transcendental, and universal truth. The system is axiomatized. Its universal-truth fragment coincides with something that Arthur Prior once proposed as System A. The ideas of necessary and transcendental truth are further clarified by considering their interaction with actual truth. Adding an operator for actual truth to the formal system produces a system closely related to one of Crossley and Humberstone.


1966 ◽  
Vol 18 (3) ◽  
pp. 843-850 ◽  
Author(s):  
A. G. Devries

This study investigates the influence of sample size and of non-replacement versus replacement sampling on the number of MMPI items which will reach significance levels of .05, .01, and .001 purely by chance. The results show that replacement sampling of a VA NP population gives a more stable sampling distribution than non-replacement sampling. Larger samples seem to yield a somewhat greater number of mean chance occurrences than smaller samples. The common procedure of requiring that more than 19 items of a 373-item MMPI reach significance at the .05 level before a non-chance occurrence is accepted is a definite underestimation according to the upper range limits of chance significances found in this study for samples of size 30, 40, 50, 100, 150, and 200. The best basis for deciding whether or not findings are non-chance occurrences would be to use a more stringent level of significance than the .05 level and to use the upper range limits of chance significances found in this study for the appropriate sample size. It is likely that different populations have associated with them different numbers of items which will reach chance significance.
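The arithmetic behind the criticized cutoff can be sketched directly. This is a back-of-envelope check under an item-independence assumption the MMPI items do not actually satisfy, which is part of the authors' point that the real spread of chance significances is wider.

```python
from math import comb

# Under pure chance, each of 373 items is "significant" at alpha = .05
# independently with p = .05, so the count of chance hits would be
# Binomial(373, .05). (Independence is an idealization; the study's
# empirical limits are wider, hence the >19 rule underestimates.)

N, P = 373, 0.05

def tail_prob(k: int) -> float:
    """P(X >= k) for X ~ Binomial(N, P)."""
    return sum(comb(N, i) * P**i * (1 - P)**(N - i) for i in range(k, N + 1))

print(f"expected chance hits: {N * P:.1f}")   # ~18.7
print(f"P(X > 19) = {tail_prob(20):.3f}")     # ~0.4: exceeding 19 is common
# Smallest cutoff k with P(X >= k) < .05, even under independence:
k = next(k for k in range(N + 1) if tail_prob(k) < 0.05)
print(f"k with P(X >= k) < .05: {k}")
```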


1953 ◽  
Vol 49 (3) ◽  
pp. 367-376
Author(s):  
Alan Rose

In 1930 Łukasiewicz (3) developed an ℵ₀-valued propositional calculus with two primitives called implication and negation. The truth-values were all rational numbers satisfying 0 ≤ x ≤ 1, 1 being the designated truth-value. If the truth-values of P, Q, NP, CPQ are x, y, n(x), c(x, y) respectively, then n(x) = 1 − x and c(x, y) = min(1, 1 − x + y).
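These connectives are easy to experiment with. The small sketch below uses the standard Łukasiewicz truth functions; the derived connectives shown are the usual ones, not specifics of Rose's paper.

```python
from fractions import Fraction as Fr

# Standard Lukasiewicz aleph_0-valued truth functions on rationals in [0, 1].
def n(x):       # negation N
    return 1 - x

def c(x, y):    # implication C
    return min(1, 1 - x + y)

# Usual derived connectives: disjunction Apq = CCpqq, conjunction by De Morgan.
def a(x, y): return c(c(x, y), y)      # evaluates to max(x, y)
def k(x, y): return n(a(n(x), n(y)))   # evaluates to min(x, y)

half, third = Fr(1, 2), Fr(1, 3)
print(c(half, third))                      # 5/6: implication is graded
print(a(half, third) == max(half, third))  # True
print(k(half, third) == min(half, third))  # True

# 1 is designated: if P and CPQ both take value 1, so does Q (modus ponens).
for y in (Fr(0), third, half, Fr(1)):
    if c(Fr(1), y) == 1:
        assert y == 1
```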

