A standard model-theoretic approach to operational semantics of recursive programs

1993 ◽  
Vol 8 (2) ◽  
pp. 155-161
Author(s):  
Shao Zhiqing

10.29007/f89j ◽  
2018 ◽  
Author(s):  
Vivek Nigam ◽  
Limin Jia ◽  
Anduo Wang ◽  
Boon Thau Loo ◽  
Andre Scedrov

Network Datalog (<i>NDlog</i>) is a recursive query language that extends Datalog by allowing programs to be distributed across a network. In our initial efforts to formally specify <i>NDlog</i>'s operational semantics, we found several problems with the evaluation algorithm currently in use, including unsound results, unintended multiple derivations of the same table entry, and divergence. In this paper, we take a first step towards correcting these problems by formally specifying a new operational semantics for <i>NDlog</i> and proving its correctness for the fragment of non-recursive programs. We also argue that if termination is guaranteed, the results extend to recursive programs as well. Finally, we identify a number of potential implementation improvements for <i>NDlog</i>.
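As a rough illustration of the evaluation issues the abstract mentions, the following is a minimal centralized, bottom-up (semi-naive) Datalog sketch in Python — not <i>NDlog</i>'s actual distributed algorithm. It computes the classic transitive-closure query, and the delta set is what prevents re-deriving the same table entry in every round:

```python
# Bottom-up evaluation of:
#   path(S,D) :- link(S,D).
#   path(S,D) :- link(S,Z), path(Z,D).
# Semi-naive evaluation joins only against facts derived in the
# previous round (delta), so each table entry is derived once.

def transitive_closure(links):
    """links: set of (src, dst) edges; returns the full set of path facts."""
    path = set(links)          # base rule
    delta = set(links)         # facts new in the previous round
    while delta:
        # recursive rule, restricted to newly derived path facts
        new = {(s, d) for (s, z) in links for (z2, d) in delta if z == z2}
        delta = new - path     # keep only genuinely new facts
        path |= delta
    return path

edges = {("a", "b"), ("b", "c"), ("c", "d")}
print(sorted(transitive_closure(edges)))
# → [('a','b'), ('a','c'), ('a','d'), ('b','c'), ('b','d'), ('c','d')]
```

Without the `delta` restriction (naive evaluation), every round would recompute all previously derived facts — the centralized analogue of the duplicate-derivation problem the paper identifies.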


2001 ◽  
Vol 16 (19) ◽  
pp. 3203-3216 ◽  
Author(s):  
HIROMI KASE ◽  
KATSUSADA MORITA ◽  
YOSHITAKA OKUMURA

Connes' gauge theory on M4 × Z2 is reformulated at the Lagrangian level. It is pointed out that the field strength in Connes' gauge theory is not unique. We explicitly construct a field strength different from Connes' and prove that our definition leads to a generation-number-independent Higgs potential. It is also shown that the nonuniqueness is related to the assumption that two different extensions of the differential geometry are possible when the extra one-form basis χ is introduced to define the differential geometry on M4 × Z2. Our reformulation is applied to the standard model based on Connes' color-flavor algebra. A connection between the unimodularity condition and electric charge quantization is then discussed in the presence or absence of νR.


2017 ◽  
Vol 23 (3) ◽  
pp. 296-323 ◽  
Author(s):  
ROSS T. BRADY

Abstract. This is a general account of metavaluations and their applications, which can be seen as an alternative to standard model-theoretic methodology. They work best for what are called metacomplete logics, which include the contraction-less relevant logics, with possible additions of Conjunctive Syllogism, (A→B) & (B→C) → .A→C, and the irrelevant A→ .B→A, these including the logic MC of meaning containment, which is arguably a good entailment logic. Indeed, metavaluations focus on the formula-inductive properties of theorems of entailment form A→B, splintering into two types, M1- and M2-, according to key properties of negated entailment theorems (see below). Metavaluations have an inductive presentation and thus share some of the advantages of model theory, but they represent proof rather than truth and thus capture proof-theoretic properties, such as the priming property, if ⊢ A ∨ B then ⊢ A or ⊢ B, and the negated-entailment properties, not-⊢ ∼(A→B) (for M1-logics, with M1-metavaluations) and ⊢ ∼(A→B) iff ⊢ A and ⊢ ∼B (for M2-logics, with M2-metavaluations). Topics to be covered are their impact on naive set theory and paradox solution, and also Peano arithmetic and Gödel's First and Second Theorems. It is interesting to note here that the familiar M1- and M2-metacomplete logics can be used to solve the set-theoretic paradoxes and, by inference, the Liar Paradox and key semantic paradoxes. For M1-logics in particular, the final metavaluation used to prove simple consistency is far simpler than its correspondent in the model-theoretic proof, in that it consists of a limit point of a single transfinite sequence rather than of a transfinite sequence of such limit points, as occurs in the model-theoretic approach. Additionally, it can be shown that Peano Arithmetic is simply consistent, using metavaluations that constitute finitary methods. Both of these results use specific metavaluational properties that have no correspondents in standard model theory, and thus it would be highly unlikely that such model theory could prove these results in their final forms.


Author(s):  
E. A. Ashcroft ◽  
A. A. Faustini ◽  
R. Jaggannathan ◽  
W. W. Wadge

We know what a Lucid program means mathematically (see Chapter 3), yet that in itself does not suggest a particular model of computation for deriving the same meaning operationally. The purpose of this chapter is to consider the various ways that Lucid programs can be evaluated and to describe in detail the most appropriate model of computation, namely, eduction. Previously, we have seen that Lucid programs can be viewed globally in geometrical terms or locally in elemental terms. Both these views are equally valid as mental devices to enable the programmer to conceive and understand Lucid programs. And each view suggests its own family of computing models—extensional models that embody the global geometrical view and intensional models that embody the local elemental view. Before we compare these two approaches to evaluating Lucid programs, it is worth relating the operational semantics given by a model of computation to the mathematical semantics. Since Lucid is purely declarative, the correct meaning of a Lucid program is that which is given mathematically. This is done without appealing to any operational notions [8]. Thus, the mathematical semantics of a Lucid program has primacy over the many operational semantics that can be given to the Lucid program using different models of computation. Consequently, the correctness of a model of computation is determined by its ability to give Lucid programs an operational semantics that coincides with their mathematical semantics. Let us consider an extensional model of computation called reduction [37]. It is the standard model for evaluating declarative programs, and more specifically, functional programs. In reduction, programs are considered to be expressions, and a program is evaluated by repeatedly transforming, or reducing, the expression into a possibly simpler expression.
The original expression must include any data that the program is to work on, so that at every stage we are manipulating both program and data, and the two become intimately entwined. The process stops when no further transformation can be applied. At each stage, several transformations may be possible, but it doesn’t matter which we apply. If we get an answer, we always get the same answer, but it is possible to make choices so that we do not arrive at the answer.
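The behavior described above can be sketched concretely. The following toy reducer (in Python, with nested tuples standing in for expressions; the rewrite rules are illustrative, not Lucid's) repeatedly transforms an expression until no transformation applies; any terminating choice of reduction order yields the same normal form:

```python
# Toy reduction: expressions are nested tuples (op, left, right);
# evaluation repeatedly rewrites one reducible subexpression until
# no further transformation can be applied (the normal form).

def step(e):
    """Apply one reduction somewhere in e, or return None if e is a normal form."""
    if isinstance(e, tuple):
        op, a, b = e
        if isinstance(a, int) and isinstance(b, int):
            return a + b if op == "+" else a * b   # reduce a redex
        ra = step(a)
        if ra is not None:
            return (op, ra, b)    # reduce inside the left operand first
        rb = step(b)
        if rb is not None:
            return (op, a, rb)    # otherwise reduce inside the right operand
    return None                    # integers are already fully reduced

def reduce_expr(e):
    """Iterate step until no transformation applies."""
    while True:
        nxt = step(e)
        if nxt is None:
            return e
        e = nxt

# (1 + 2) * (3 + 4): either inner sum could be reduced first, but
# every terminating reduction sequence reaches the same answer, 21.
print(reduce_expr(("*", ("+", 1, 2), ("+", 3, 4))))
```

This sketch fixes a leftmost strategy for simplicity; the text's point is precisely that the strategy is a free choice — the answer, if reached, is the same.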


1992 ◽  
Vol 2 (3) ◽  
pp. 273-321 ◽  
Author(s):  
Frank S. K. Silbermann ◽  
Bharat Jayaraman

Abstract. The integration of functional and logic programming languages has been a topic of great interest in the last decade. Many proposals have been made, yet none is completely satisfactory, especially in the context of higher-order functions and lazy evaluation. This paper addresses these shortcomings via a new approach: domain theory as a common basis for functional and logic programming. Our integrated language remains essentially within the functional paradigm. The logic programming capability is provided by set abstraction (via Zermelo-Fraenkel set notation), using the Herbrand universe as a set abstraction generator, but for efficiency reasons our proposed evaluation procedure treats this generator's enumeration parameter as a logical variable. The language is defined in terms of (computable) domain-theoretic constructions and primitives, using the lower (or angelic) powerdomain to model the set abstraction facility. The result is a simple, elegant and purely declarative language that successfully combines the most important features of both pure functional programming and pure Horn logic programming. Referential transparency with respect to the underlying mathematical model is maintained throughout. An implicitly correct operational semantics is obtained by direct execution of the denotational semantic definition, modified suitably to permit logical variables whenever the Herbrand universe is being generated within a set abstraction. Completeness of the operational semantics requires a form of parallel evaluation, rather than the more familiar leftmost rule.
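The core idea of the abstract — a set abstraction whose generator lazily enumerates the Herbrand universe — can be sketched in Python (not the paper's language; the Peano-numeral constructors z and s/1 are illustrative assumptions, not taken from the paper):

```python
# Sketch of set abstraction over a lazily enumerated Herbrand universe.
# Terms are built from the constant "z" and unary constructor "s",
# giving the numerals z, s(z), s(s(z)), ...
from itertools import islice

def herbrand():
    """Lazily enumerate Herbrand terms: z, s(z), s(s(z)), ..."""
    term = "z"
    while True:
        yield term
        term = f"s({term})"

def set_abstraction(pred, universe):
    """{ x in universe | pred(x) } as a lazy stream, in generation order."""
    return (t for t in universe if pred(t))

# e.g. the "even numerals": terms with an even number of s-constructors
evens = set_abstraction(lambda t: t.count("s(") % 2 == 0, herbrand())
print(list(islice(evens, 3)))
```

Laziness is essential here: the universe is infinite, so the set can only be consumed as a stream — which is also why, as the abstract notes, a complete operational semantics needs a fairer evaluation strategy than naive leftmost enumeration of nested generators.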


Author(s):  
Sterling P. Newberry

At the 1958 meeting of our society, then known as EMSA, the author introduced the concept of microspace and suggested its use to provide adequate information storage space, with electron microscope techniques providing storage and retrieval access. At this current meeting of MSA, he wishes to suggest an additional use of the power of the electron microscope. The author has been contemplating this new use for some time and would have suggested it in the EMSA fiftieth-year commemorative volume, but for page limitations. There is compelling reason to put forth this suggestion today because problems have arisen in the “Standard Model” of particle physics, and funds are being greatly reduced just as we need higher-energy machines to resolve these problems. Therefore, any technique which complements or augments what we can accomplish during this austerity period with the machines at hand is worth exploring.

