Integration of Syntactic Analysis and Semantic Interpretation Based on Equivalent Transformation

Author(s):
Hiroshi Mabuchi
Kiyoshi Akama
Takahiko Ishikawa
Hidekatsu Koike
...

Building an efficient algorithm for natural language understanding through flexible, cooperative interaction between syntactic analysis and semantic interpretation is very difficult. To overcome these difficulties, the present paper proposes a new method for designing knowledge processing systems whose computation is based on equivalent transformation of declarative descriptions. Basic procedures for syntactic analysis and semantic interpretation are formalized as mutually independent equivalent transformation rules. Rule selection is determined dynamically and flexibly during execution by a general principle that is independent of the sentence domain.
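To make this computation style concrete, the following is a minimal sketch and not taken from the paper: a description is modelled as a set of atoms, each equivalent transformation rule as a function that either returns an equivalent description or signals inapplicability, and rule selection follows a fixed priority order. The rule names, the toy lexicon, and the two-word example are illustrative assumptions.

```python
# Minimal sketch of computation by equivalent transformation (ET), under
# assumptions not taken from the paper: a "description" is a frozenset of
# atoms (tuples), and each ETR is a function that returns an equivalent
# transformed description, or None if it is not applicable.

def split_rule(desc):
    """Illustrative syntactic rule: rewrite a parse goal over a two-word
    string into two word-level goals (hypothetical atoms)."""
    for atom in desc:
        if atom[0] == "parse" and len(atom[1]) == 2:
            w1, w2 = atom[1]
            return (desc - {atom}) | {("word", w1), ("word", w2)}
    return None

def lexicon_rule(desc):
    """Illustrative semantic rule: replace a word goal by its category."""
    lexicon = {"dogs": "noun", "bark": "verb"}
    for atom in desc:
        if atom[0] == "word" and atom[1] in lexicon:
            return (desc - {atom}) | {("cat", atom[1], lexicon[atom[1]])}
    return None

# Rules carry priorities; the highest-priority applicable rule fires first.
RULES = [(1, split_rule), (2, lexicon_rule)]

def compute(desc):
    """Transform the description until no rule applies (a normal form)."""
    while True:
        for _, rule in sorted(RULES):
            new_desc = rule(desc)
            if new_desc is not None:
                desc = new_desc
                break
        else:
            return desc

print(compute(frozenset({("parse", ("dogs", "bark"))})))
```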

Author(s):
Yoshinori Shigeta
Kiyoshi Akama
Hiroshi Mabuchi
Hidekatsu Koike
...

We present a way to convert constraint handling rules (CHRs) into equivalent transformation rules (ETRs) and demonstrate the correctness of the conversion within equivalent transformation (ET) theory. In the ET computation model, computation is regarded as equivalent transformation of a description: a description is transformed successively by ETRs. Extensively used in the domain of first-order terms, the ET computation model has also been applied to knowledge processing in data domains such as RDF, UML, and XML. A CHR is a multiheaded guarded rule that rewrites constraints into simpler ones until they are solved. CHRs and ETRs are similar in syntax, but they have completely different theoretical bases for the correctness of their computation: CHRs rest on the logical equivalence of logical formulas, while ETRs rest on the set equivalence of descriptions. We convert CHRs into rules in the ET model and demonstrate that the converted rules are correct ETRs, i.e., that they preserve the meanings of descriptions. We discuss theoretical correspondences and differences between CHRs and ETRs, giving examples of correct ETRs that cannot be represented as CHRs.
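As a loose illustration of what a multiheaded guarded rule does, the sketch below encodes the standard CHR textbook gcd rules as transformations over a multiset of constraints. The Python representation (a Counter of constraint atoms) and the rule-selection loop are assumptions of this sketch, not the conversion procedure defined in the paper.

```python
# Hypothetical encoding of two CHR-style rules over a multiset of
# constraints.  The gcd rules are the standard CHR textbook example; the
# Python encoding is an assumption of this sketch.

from collections import Counter

def gcd_zero(cs):
    """gcd(0) <=> true : drop a gcd(0) constraint."""
    if cs[("gcd", 0)] > 0:
        out = cs.copy()
        out[("gcd", 0)] -= 1
        return +out          # unary + removes zero counts
    return None

def gcd_subtract(cs):
    """gcd(N) \\ gcd(M) <=> 0 < N, N <= M | gcd(M - N) :
    a two-headed guarded rule; keeps gcd(N), replaces gcd(M)."""
    pairs = [(n, m) for (_, n) in cs for (_, m) in cs
             if 0 < n <= m and (n != m or cs[("gcd", n)] >= 2)]
    if not pairs:
        return None
    n, m = pairs[0]
    out = cs.copy()
    out[("gcd", m)] -= 1
    out[("gcd", m - n)] += 1
    return +out

def solve(cs):
    """Apply rules until no rule is applicable."""
    rules = [gcd_zero, gcd_subtract]
    while True:
        for rule in rules:
            nxt = rule(cs)
            if nxt is not None:
                cs = nxt
                break
        else:
            return cs

# gcd(12), gcd(8)  -->  gcd(4)
print(solve(Counter({("gcd", 12): 1, ("gcd", 8): 1})))
```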


Author(s):  
John Carroll

This article introduces the concepts and techniques of natural language (NL) parsing, which means using a grammar to assign a syntactic analysis to a string of words, a lattice of word hypotheses output by a speech recognizer, or similar input. The level of detail required depends on the language processing task being performed and the particular approach to the task being pursued. The article first describes approaches that produce ‘shallow’ analyses. It also outlines approaches to parsing that analyse the input in terms of labelled dependencies between words. Producing hierarchical phrase structure requires grammars with at least context-free (CF) power, and the CF algorithms widely used in NL parsing are described. To support detailed semantic interpretation, more powerful grammar formalisms are required; these are usually parsed using extensions of CF parsing algorithms. The article then describes unification-based parsing. Finally, it discusses three important issues that must be tackled in real-world applications of parsing: evaluation of parser accuracy, parser efficiency, and measurement of grammar/parser coverage.
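As one concrete instance of the CF algorithms mentioned, here is a compact CYK recogniser for a grammar in Chomsky normal form; the toy grammar and sentence are illustrative and not taken from the article.

```python
# A compact CYK recogniser for a context-free grammar in Chomsky normal
# form.  The toy grammar and example sentence are assumptions of this sketch.

from itertools import product

# CNF grammar: lexical rules map words to categories, binary rules combine them.
LEXICAL = {"she": {"NP"}, "eats": {"V"}, "fish": {"NP", "N"}}
BINARY = {("NP", "VP"): {"S"}, ("V", "NP"): {"VP"}}

def cyk(words):
    n = len(words)
    # chart[i][j] holds the categories spanning words[i:j]
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = set(LEXICAL.get(w, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for a, b in product(chart[i][k], chart[k][j]):
                    chart[i][j] |= BINARY.get((a, b), set())
    return "S" in chart[0][n]

print(cyk("she eats fish".split()))   # True
```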


Author(s):
Takahiko Ishikawa
Kiyoshi Akama
Hiroshi Mabuchi
...

In the computation model of equivalent transformation (ET), problems are expressed as declarative descriptions. Programs, which consist of equivalent transformation rules (ETRs), are made from the declarative descriptions and applied to questions in order to solve them. The ET model can achieve varied and efficient problem solving, mainly owing to the expressive power and the priorities of ETRs. In this paper we investigate and demonstrate, by solving a sample problem, how to make programs from problem descriptions in the ET paradigm. We introduce basic methods for generating and improving rules in search of desirable ETRs. ETRs can be transformed, while preserving correctness of computation, through several manipulation techniques, such as replacing nondeterministic atoms with sequentially executable atoms, introducing multi-head rules, and adjusting rule priorities; with these techniques, correct programs can be improved into programs that are both correct and more efficient.
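The following toy sketch, an assumption of this summary rather than the paper's sample problem, illustrates the efficiency argument behind multi-head rules: handling member/2 and gt/2 atoms together avoids branching on candidates that a later test would reject.

```python
# A toy illustration (not from the paper) of why a multi-head rule can
# improve efficiency: combining member(x, L) and gt(x, N) in one rule
# prunes candidates at once instead of branching on every list element.

def solve_single_head(lst, n):
    """Branch on member/2 first, then test gt/2: one branch per element."""
    branches = 0
    answers = []
    for x in lst:            # single-head rule: expand member(x, lst)
        branches += 1
        if x > n:            # separate rule: test gt(x, n)
            answers.append(x)
    return answers, branches

def solve_multi_head(lst, n):
    """A multi-head rule rewrites {member(x, lst), gt(x, n)} in one step
    into member(x, [e for e in lst if e > n]); only survivors remain."""
    filtered = [e for e in lst if e > n]
    return filtered, len(filtered)

data = list(range(10))
print(solve_single_head(data, 7))   # ([8, 9], 10) : 10 branches explored
print(solve_multi_head(data, 7))    # ([8, 9], 2)  : only successful candidates
```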


1975
Vol 30
pp. 1-24
Author(s):
I. Batoni
R. Henning
H. Lehmann
B. Schirmer
M. Zoeppritz

LIANA is a question answering system written in PL/1. The program takes German natural language input and, by morphological, syntactic, and semantic analysis, creates a representation of the text, which is stored and can be accessed for retrieval purposes. All individuals (objects) mentioned in a sentence are found and stored; in continuous text, information about individuals can therefore be accumulated successively. LIANA uses the programming concept of the Boston Syntax Analyzer, so the output of syntactic analysis is a tree structure, represented by pointers connecting the nodes of the tree. Each node is associated with a feature table that is operated on by the semantic interpretation. Node and feature handling is facilitated by a set of macros for adding, erasing, and checking features and for copying, deleting, and inserting nodes.
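The data structure described, pointer-linked tree nodes each carrying a feature table manipulated by small macros, might be re-expressed roughly as follows; the original macros are PL/1, so this Python sketch and its helper names are purely hypothetical.

```python
# Hypothetical Python re-expression of the structure the abstract describes:
# parse-tree nodes linked by pointers, each with a feature table, plus small
# helpers playing the role of LIANA's PL/1 macros.

class Node:
    def __init__(self, label, features=None):
        self.label = label
        self.features = dict(features or {})   # the node's feature table
        self.children = []                      # "pointers" to child nodes

# Feature-handling helpers (add, erase, check), as the abstract lists.
def add_feature(node, name, value=True):
    node.features[name] = value

def erase_feature(node, name):
    node.features.pop(name, None)

def has_feature(node, name):
    return name in node.features

# Node-handling helpers (copy, delete, insert).
def copy_node(node):
    clone = Node(node.label, node.features)
    clone.children = [copy_node(c) for c in node.children]
    return clone

def insert_node(parent, child, index=None):
    parent.children.insert(len(parent.children) if index is None else index, child)

def delete_node(parent, child):
    parent.children.remove(child)

# Example: a tiny parse tree annotated during semantic interpretation.
s = Node("S")
np, vp = Node("NP"), Node("VP")
insert_node(s, np)
insert_node(s, vp)
add_feature(np, "number", "singular")
print(has_feature(np, "number"), copy_node(s).children[0].features)
```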


2007
Vol 363 (1493)
pp. 1037-1054
Author(s):
Lorraine K Tyler
William Marslen-Wilson

The research described here combines psycholinguistically well-motivated questions about different aspects of human language comprehension with behavioural and neuroimaging studies of normal performance, incorporating both subtractive analysis techniques and functional connectivity methods, and applying these tasks and techniques to the analysis of the functional and neural properties of brain-damaged patients with selective linguistic deficits in the relevant domains. The results of these investigations point to a set of partially dissociable sub-systems supporting three major aspects of spoken language comprehension, involving regular inflectional morphology, sentence-level syntactic analysis and sentence-level semantic interpretation. Differential patterns of fronto-temporal connectivity for these three domains confirm that the core aspects of language processing are carried out in a fronto-temporo-parietal language system which is modulated in different ways as a function of different linguistic processing requirements. No one region or sub-region holds the key to a specific language function; each requires the coordination of activity within a number of different regions. Functional connectivity analysis plays the critical role of indicating the regions which directly participate in a given sub-process, by virtue of their joint time-dependent activity. By revealing these codependencies, connectivity analysis sharpens the pattern of structure–function relations underlying specific aspects of language performance.


Author(s):
Kiyoshi Akama
Ekawit Nantajeewarawat

In the equivalent transformation (ET) computation model, a specification provides background knowledge in a problem domain and defines a set of queries of interest. A program is a set of prioritized transformation rules, and computation consists in successive reduction of queries by meaning-preserving transformation with respect to the given background knowledge. We present a formalization of the ET model from the viewpoint of program synthesis, where not only computation but also program correctness and correctness relations are of central importance. The notion of program correctness defines “what it means for a program to be correct with respect to a specification,” and a correctness relation provides guidance on “how to obtain such a program.” The correctness relation of the ET model is established, and, based on it, we discuss how the basic structure of the ET model facilitates program synthesis, together with program synthesis strategies in this model.
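A minimal sketch of the correctness notion, under assumptions not taken from the paper (a toy parent/grandparent domain and hypothetical helper names): a program is correct for a query set when the answers its rules compute coincide with the answers determined by the specification's background knowledge.

```python
# Sketch of one aspect of program correctness in the ET spirit: for every
# query in the specification's query set, the answers computed by the rules
# must coincide with the answers fixed by the background knowledge.
# The parent/grandparent domain and helper names are assumptions.

# Background knowledge: parent/2 facts; the intended meaning of
# grandparent(X, Z) is "there is Y with parent(X, Y) and parent(Y, Z)".
PARENT = {("ann", "bob"), ("bob", "cal"), ("bob", "dee")}

def declarative_answers(query_person):
    """Answers fixed by the specification's meaning, independent of rules."""
    return {z for (x, y) in PARENT if x == query_person
              for (y2, z) in PARENT if y2 == y}

def program_answers(query_person):
    """Answers obtained by an illustrative rule set that first indexes the
    facts and then unfolds grandparent into two indexed lookups."""
    children = {}
    for (x, y) in PARENT:
        children.setdefault(x, set()).add(y)
    return {z for y in children.get(query_person, ())
              for z in children.get(y, ())}

def correct_for(queries):
    """The program is correct w.r.t. this query set if the answers coincide."""
    return all(program_answers(q) == declarative_answers(q) for q in queries)

print(correct_for({"ann", "bob", "cal"}))   # True
```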

