Formalization of the Equivalent Transformation Computation Model

Author(s):
Kiyoshi Akama
Ekawit Nantajeewarawat
In the equivalent transformation (ET) computation model, a specification provides background knowledge in a problem domain and defines a set of queries of interest. A program is a set of prioritized transformation rules, and computation consists in the successive reduction of queries by meaning-preserving transformations with respect to the given background knowledge. We present a formalization of the ET model from the viewpoint of program synthesis, in which not only computation but also program correctness and correctness relations are of central importance. The notion of program correctness defines “what it means for a program to be correct with respect to a specification,” and a correctness relation provides guidance on “how to obtain such a program.” We establish the correctness relation of the ET model and, on that basis, discuss how the basic structure of the ET model facilitates program synthesis, together with program synthesis strategies in this model.
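The ET-style computation loop described above can be sketched as follows. This is a minimal illustration under assumed representations (queries as nested tuples, rules as priority/test/transform triples), not the authors' formal definitions:

```python
# Minimal sketch of ET-style reduction: repeatedly apply the
# highest-priority applicable rule, each application being a
# meaning-preserving transformation of the query.
# The query/rule representation here is hypothetical.

def et_reduce(query, rules):
    """`rules` is a list of (priority, applies, transform) triples;
    lower number = higher priority. Reduce until no rule applies."""
    changed = True
    while changed:
        changed = False
        for _, applies, transform in sorted(rules, key=lambda r: r[0]):
            if applies(query):
                query = transform(query)  # meaning-preserving step
                changed = True
                break  # restart from the highest-priority rule
    return query

# Toy rules over arithmetic queries represented as nested tuples.
rules = [
    (1, lambda q: isinstance(q, tuple) and q[0] == "add" and q[1] == 0,
        lambda q: q[2]),                        # add(0, x) -> x
    (2, lambda q: isinstance(q, tuple) and q[0] == "add"
                  and isinstance(q[1], int) and isinstance(q[2], int),
        lambda q: q[1] + q[2]),                 # fold constants
]

print(et_reduce(("add", 0, ("add", 2, 3)), rules))  # -> 5
```

Each step replaces the query with an equivalent one, so the final answer has the same meaning as the original query with respect to the (implicit) background knowledge.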

Author(s):
Takahiko Ishikawa
Kiyoshi Akama
Hiroshi Mabuchi
...

In the equivalent transformation (ET) computation model, problems are expressed as declarative descriptions. Programs, which consist of equivalent transformation rules (ETRs), are constructed from these declarative descriptions and applied to questions to solve them. The ET model supports diverse and efficient styles of problem solving, mainly owing to the expressive power and priorities of ETRs. In this paper, by solving a sample problem, we investigate and demonstrate how to construct programs from problem descriptions in the ET paradigm. We introduce basic methods for generating and improving rules in search of desirable ETRs. ETRs can be transformed, while preserving the correctness of computation, through a number of manipulation techniques, e.g., replacing nondeterministic atoms with sequentially executable ones, introducing multi-head rules, and adjusting rule priorities, by which correct programs can be effectively improved into programs that are both correct and more efficient.
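One of the manipulations mentioned above, the multi-head rule, matches several atoms of a clause body at once. The sketch below illustrates the idea with a hypothetical atom representation and a made-up contradiction rule; it is not the paper's rule language:

```python
# Sketch of applying a multi-head rule: the rule inspects *pairs* of
# atoms in a clause body, unlike single-head rules that inspect one atom.
# Atom representation (tuples) and the example rule are hypothetical.

def apply_multi_head(body, rule):
    """Try the rule on every ordered pair of atoms; return the new body
    (untouched atoms plus the rule's result) or None if it never fires."""
    for i, a in enumerate(body):
        for j, b in enumerate(body):
            if i != j:
                result = rule(a, b)
                if result is not None:
                    rest = [x for k, x in enumerate(body) if k not in (i, j)]
                    return rest + result
    return None

# Hypothetical rule: member(X, L) together with notmember(X, L) is
# contradictory, so the pair collapses to the single atom fail().
def contradiction_rule(a, b):
    if a[0] == "member" and b[0] == "notmember" and a[1:] == b[1:]:
        return [("fail",)]
    return None

body = [("p", "X"), ("member", "X", "L"), ("notmember", "X", "L")]
print(apply_multi_head(body, contradiction_rule))
# -> [('p', 'X'), ('fail',)]
```

A single-head rule could not detect this contradiction in one step, which is one reason multi-head rules can make programs more efficient.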


Author(s):
Yoshinori Shigeta
Kiyoshi Akama
Hiroshi Mabuchi
Hidekatsu Koike
...

We present a way to convert constraint handling rules (CHRs) into equivalent transformation rules (ETRs) and demonstrate the correctness of the conversion in equivalent transformation (ET) theory. In the ET computation model, computation is regarded as the equivalent transformation of a description, which is transformed successively by ETRs. Extensively used in the domain of first-order terms, the ET computation model has also been applied to knowledge processing in data domains such as RDF, UML, and XML. A CHR is a multi-headed guarded rule that rewrites constraints into simpler ones until they are solved. CHRs and ETRs are similar in syntax, but they have completely different theoretical bases for the correctness of their computation: CHRs rest on the logical equivalence of logical formulas, while ETRs rest on the set equivalence of descriptions. We convert CHRs into rules used in the ET model and show that the converted rules are correct ETRs, i.e., they preserve the meanings of descriptions. We discuss theoretical correspondences and differences between CHRs and ETRs, giving examples of correct ETRs that cannot be represented as CHRs.
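The CHR style of rewriting constraints into simpler ones can be illustrated with the classic `leq` example from the CHR literature (not a rule set from this paper). The sketch below simplifies a multiset of constraints with a reflexivity rule and a multi-headed antisymmetry rule:

```python
# Sketch of CHR-style simplification on leq ("less than or equal")
# constraints, following the folklore leq example:
#   reflexivity:  leq(X, X) <=> true
#   antisymmetry: leq(X, Y), leq(Y, X) <=> X = Y
# Constraint representation (tuples) is an assumption for illustration.

def simplify(constraints):
    cs = list(constraints)
    changed = True
    while changed:
        changed = False
        # reflexivity: drop leq(X, X)
        for c in cs:
            if c[0] == "leq" and c[1] == c[2]:
                cs.remove(c)
                changed = True
                break
        if changed:
            continue
        # antisymmetry: leq(X, Y), leq(Y, X) becomes eq(X, Y)
        for a in cs:
            for b in cs:
                if (a is not b and a[0] == b[0] == "leq"
                        and a[1] == b[2] and a[2] == b[1]):
                    cs.remove(a); cs.remove(b)
                    cs.append(("eq", a[1], a[2]))
                    changed = True
                    break
            if changed:
                break
    return cs

print(simplify([("leq", "X", "Y"), ("leq", "Y", "X"), ("leq", "Z", "Z")]))
# -> [('eq', 'X', 'Y')]
```

The antisymmetry rule is multi-headed: it fires only when two constraints are present together, which is the syntactic feature that CHRs and ETRs share.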


Author(s):
Itaru Takarajima
Kiyoshi Akama
Ikumi Imani
Hiroshi Mabuchi
...

We study the termination of nondeterministic programs, which play an important, basic role in program synthesis, and provide a foundation for proving program termination. The class of programs considered here is based on the <I>equivalent transformation (ET)</I> computation model, in which computation involves the successive application of rules to sets of clauses. Since the ET model has more expressive terms and rules than other programming languages, program termination is difficult to prove: we must take into account all possible changes of terms made by various rules. We propose a sufficient condition for ET program termination that establishes termination by checking each rule in a given program, and we provide an algorithm for this check.
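The shape of such a per-rule check can be sketched as follows: if every rule strictly decreases some well-founded measure (here, term size), then any sequence of rule applications terminates. This is only an illustration of the idea of checking rules one by one; the paper's actual sufficient condition is more refined, and a test over sample inputs is of course not a proof:

```python
# Sketch: per-rule termination check via a strictly decreasing measure.
# Terms are nested tuples/atoms (an assumed representation).

def term_size(t):
    """Number of nodes in a term."""
    if isinstance(t, tuple):
        return 1 + sum(term_size(x) for x in t)
    return 1

def rule_decreases_size(sample_inputs, rule):
    """Check (on sample inputs) that whenever the rule applies,
    it strictly shrinks the term; a proof would need all inputs."""
    for t in sample_inputs:
        out = rule(t)
        if out is not None and term_size(out) >= term_size(t):
            return False
    return True

# Hypothetical rule: add(0, X) -> X, which always removes two nodes.
def simp(t):
    if isinstance(t, tuple) and t[0] == "add" and t[1] == 0:
        return t[2]
    return None

samples = [("add", 0, ("add", 0, "x")), ("add", 0, "y"), "z"]
print(rule_decreases_size(samples, simp))  # -> True
```

Because the measure is a nonnegative integer that strictly decreases at each step, no infinite rewriting sequence is possible for a rule set that passes such a check.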


1981
Vol 4 (1)
pp. 151-172
Author(s):
Pierangelo Miglioli
Mario Ornaghi

The aim of this paper is to provide a general explanation of the “algorithmic content” of proofs, from a point of view adequate to computer science. Unlike the more usual approach of program synthesis, where the “algorithmic content” is captured by translating proofs into standard algorithmic languages, here we propose a “direct” interpretation of “proofs as programs”. To do this, a clear explanation is needed of what is meant by “proof-execution”, a concept which must generalize the usual “program-execution”. In the first part of the paper we discuss the general conditions to be satisfied by executions of proofs and consider, as a first example of proof-execution, Prawitz’s normalization. According to our analysis, simple normalization is not fully adequate to the goals of the theory of programs; so, in the second section we present an execution procedure based on ideas more oriented to computer science than Prawitz’s. We provide a soundness theorem stating that our executions satisfy an appropriate adequacy condition, and discuss the sense in which our “proof-algorithms” inherently involve parallelism and nondeterminism. The properties of our computation model are analyzed, and a completeness theorem involving a notion of “uniform evaluation” of open formulas is stated. Finally, an “algorithmic completeness” theorem is given, which essentially states that every flow-chart program proved to be totally correct can be simulated by an appropriate “purely logical proof”.


2007
Vol 293 (5)
pp. H2667-H2679
Author(s):
Charlotte Hwa
William C. Aird

In 1628, William Harvey provided definitive evidence that blood circulates. The notion that blood travels around the body in a circle raised the important question of how nutrients pass between blood and underlying tissue. Perhaps, Harvey posited, arterial blood pours into the flesh as into a sponge, only then to find its way into the veins. Far from solving this problem, Marcello Malpighi's discovery of the capillaries in 1661 only added to the dilemma: surely, some argued, these entities are little more than channels drilled into tissues around them. As we discuss in this review, it would take over 200 years to arrive at a consensus on the basic structure and function of the capillary wall. A consideration of the history of this period provides interesting insights into not only the central importance of the capillary as a focus of investigation, but also the enormous challenges associated with studying these elusive structures.


2021
Vol 5 (OOPSLA)
pp. 1-27
Author(s):
Xiang Gao
Arjun Radhakrishna
Gustavo Soares
Ridwan Shariffdeen
Sumit Gulwani
...

The use of third-party libraries is extremely common in application software. Libraries evolve to accommodate new features or to mitigate security vulnerabilities, thereby breaking the Application Programming Interface (API) used by the software. Such breaking changes may discourage client code from adopting new library versions, leaving the application vulnerable and out of date. We propose a novel output-oriented program synthesis algorithm to automate API usage adaptations via program transformation. We aim not to rely solely on the few human-written adaptations of clients from the old library version to the new one, since this can lead to over-fitted transformation rules. Instead, we also rely on example usages of the updated library in clients, which provide valuable context for synthesizing and applying the transformation rules. Our tool APIFix provides an automated mechanism to transform application code written against old library versions into code using the new versions, thereby achieving automated API usage adaptation that fixes the effect of breaking changes. Our evaluation shows that the transformation rules inferred by APIFix achieve 98.7% precision and 91.5% recall. By comparing our approach to state-of-the-art program synthesis approaches, we show that it significantly reduces over-fitting while synthesizing transformation rules for API usage adaptations.
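The flavor of inferring a transformation rule from a before/after adaptation example can be sketched naively as below. This is an illustrative toy, not APIFix: real systems synthesize far more general, syntax-aware rules, and the library names here are hypothetical:

```python
# Toy sketch: infer a textual (old, new) rewrite from one before/after
# adaptation example, then apply it to other client code.
import difflib

def infer_rule(before, after):
    """Return the (old, new) fragment pair that changed between the
    example's before and after versions, or None if nothing changed."""
    matcher = difflib.SequenceMatcher(None, before, after)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "replace":
            return before[i1:i2], after[j1:j2]
    return None

def apply_rule(rule, code):
    old, new = rule
    return code.replace(old, new)

# One human adaptation example: suppose the library renamed
# open_file to open_path (hypothetical API names).
before = 'h = lib.open_file("a.txt")'
after  = 'h = lib.open_path("a.txt")'
rule = infer_rule(before, after)

print(apply_rule(rule, 'data = lib.open_file("b.txt")'))
# -> data = lib.open_path("b.txt")
```

A purely textual rule like this over-fits easily (it fires on any occurrence of the substring), which is exactly the problem that motivates using additional usage examples of the new library as context.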


2010
Vol 20 (2)
pp. 229-236
Author(s):
Branko Markoski
Petar Hotomski
Dusan Malbaski
Danilo Obradovic

Proving program correctness and designing correct programs are two connected theoretical problems of great practical importance. The first is addressed by program analysis and the second by program synthesis, although the two processes often intertwine because of the close connection between the analysis and synthesis of programs. Nevertheless, considering automated methods of proving correctness on the one hand and methods of automatic program synthesis on the other, the difference is easy to tell. This paper presents a denotational interpretation of program calculation, explaining semantics by the formulae ? and ?, in such a way that they can be used for defining state sets for a program P.


Author(s):
Hiroshi Mabuchi
Kiyoshi Akama
Takahiko Ishikawa
Hidekatsu Koike
...

Designing an efficient algorithm for natural language understanding based on flexible, cooperative interaction between syntactic analysis and semantic interpretation is very difficult. To overcome these difficulties, this paper proposes a new method for designing knowledge processing systems whose computation is based on the equivalent transformation of declarative descriptions. Basic procedures for syntactic analysis and semantic interpretation are formalized as mutually independent equivalent transformation rules. Rule selection is determined dynamically and flexibly during execution by a general principle independent of the domain of sentences.

