grammar formalisms
Recently Published Documents

Total documents: 34 (five years: 2) · H-index: 6 (five years: 0)

Author(s):  
Alexander Kraas

Abstract: In the light of standardization, model-driven engineering (MDE) is becoming increasingly important for the development of DSLs, alongside traditional approaches based on grammar formalisms. Metamodels define the abstract syntax and static semantics of a DSL and can be created using the language concepts of the Meta Object Facility (MOF) or by defining a UML profile. Both metamodels and UML profiles are often provided for standardized DSLs, and the mappings of metamodels to UML profiles are usually specified informally in natural language; the same holds for the static semantics of metamodels and UML profiles. This has the disadvantage that ambiguities can occur and that the static semantics must be manually translated into a machine-processable language. To address these weaknesses, we propose a new automated approach for deriving a UML profile from the metamodel of a DSL. One novelty is that subsetting or redefining metaclass attributes are mapped to stereotype attributes whose values are computed at runtime via automatically created OCL expressions. The automatic transfer of the static semantics of a DSL to a UML profile is a further contribution of our approach. Our DSL Metamodeling and Derivation Toolchain (DSL-MeDeTo) implements all aspects of the proposed approach in Eclipse. This enabled us to successfully apply our approach to two DSLs: the Test Description Language (TDL) and the Specification and Description Language (SDL).
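The derived-attribute mechanism can be pictured with a small sketch: a stereotype attribute whose value is not stored but computed on access from the base element, analogous to the automatically generated OCL derivation expressions described above. All class and attribute names here are hypothetical illustrations, not the toolchain's API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Attribute:
    name: str
    is_composite: bool

@dataclass
class Metaclass:
    # hypothetical base metaclass with a plain attribute list
    owned_attributes: List[Attribute] = field(default_factory=list)

class PartsStereotype:
    """Toy stereotype: its 'owned_parts' value is derived on access,
    mimicking an OCL expression that filters a subsetted attribute."""
    def __init__(self, base: Metaclass):
        self.base = base

    @property
    def owned_parts(self) -> List[Attribute]:
        # derived value: the composite subset of the base attributes
        return [a for a in self.base.owned_attributes if a.is_composite]
```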


2021 · Vol 9 · pp. 707-720
Author(s):  
Lena Katharina Schiffer, Andreas Maletti

Tree-adjoining grammar (TAG) and combinatory categorial grammar (CCG) are two well-established mildly context-sensitive grammar formalisms that are known to have the same expressive power on strings (i.e., generate the same class of string languages). It is demonstrated that their expressive power on trees also essentially coincides. In fact, CCG without lexicon entries for the empty string and with only first-order rules of degree at most 2 is sufficient for its full expressive power.


Author(s):  
Jieun Kiaer

Abstract: This paper shows Korean speakers’ strong preference for incremental structure building based on the following core phenomena: (1) left–right asymmetry; (2) pre-verbal structure building and a strong preference for early association. This paper argues that these phenomena reflect the procedural aspects of linguistic competence, which are difficult to explain within non-procedural grammar formalisms. Based on these observations, I argue for the necessity of a grammar formalism that adopts left-to-right incrementality as a core property of the syntactic architecture. In particular, I aim to show the role of (1) constructive particles; (2) prosody; and (3) structural routines in incremental Korean structure building. Though the nature of this discussion is theory-neutral, in order to formalise this idea I adopt Dynamic Syntax [DS: Kempson et al. (Dynamic syntax: the flow of language understanding, Blackwell, Oxford, 2001); Cann et al. (The dynamics of language. Elsevier, Oxford, 2005)] in this paper.


Linguistics · 2019
Author(s):  
Glyn Morrill

The term “categorial grammar” refers to a variety of approaches to syntax and semantics in which expressions are categorized by recursively defined types and in which grammatical structure is the projection of the properties of the lexical types of words. In the earliest forms of categorial grammar, types are functional/implicational and interact by the logical rule of Modus Ponens. In categorial grammar there are two traditions: the logical tradition that grew out of the work of Joachim Lambek, and the combinatory tradition associated with the work of Mark Steedman. The logical approach employs methods from mathematical logic and situates categorial grammars in the context of substructural logic. The combinatory approach emphasizes practical applicability to natural language processing and situates categorial grammars within extended rewriting systems. The logical tradition interprets the history of categorial grammar as the evolution and generalization of basic functional/implicational types into a rich categorial logic suited to the characterization of the syntax and semantics of natural language, one that is at once logical, formal, computational, and mathematical, reaching a level of formal explicitness not achieved in other grammar formalisms. This is the interpretation of the field adopted in this article. This research has been partially supported by MINICO project TIN2017-89244-R. Thanks to Stepan Kuznetsov, Oriol Valentín and Sylvain Salvati for comments and suggestions. All errors and shortcomings are the author’s own.
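The interaction of functional types by Modus Ponens can be shown with a minimal sketch, a generic illustration rather than any particular categorial framework: a category X/Y consumes a Y to its right, and X\Y consumes a Y to its left, each yielding an X.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Basic:
    name: str       # atomic category, e.g. NP or S

@dataclass(frozen=True)
class Slash:
    result: object  # category produced
    arg: object     # category consumed
    dir: str        # '/' looks right, '\\' looks left

def apply_(left, right):
    """Modus Ponens as directional function application."""
    if isinstance(left, Slash) and left.dir == '/' and left.arg == right:
        return left.result       # X/Y  Y  =>  X
    if isinstance(right, Slash) and right.dir == '\\' and right.arg == left:
        return right.result      # Y  X\Y  =>  X
    return None                  # no rule applies

NP, S = Basic('NP'), Basic('S')
sleeps = Slash(S, NP, '\\')      # intransitive verb: S\NP
```

Here `apply_(NP, sleeps)` derives the sentence category S for "John sleeps", while the ungrammatical order yields no result.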


2017 · Vol 8 (1)
Author(s):  
Dag Haug

Syntactic discontinuities are very frequent in classical Latin, and yet these data have never been considered in debates on how expressive grammar formalisms need to be to capture natural languages. In this paper I show with treebank data that Latin frequently displays syntactic discontinuities that cannot be captured in standard mildly context-sensitive frameworks such as Tree-Adjoining Grammars or Combinatory Categorial Grammars. I then argue that there is no principled bound on Latin discontinuities, but that they display a broadly Zipfian distribution in which frequency drops quickly for the more complex patterns. Lexical-Functional Grammar can capture these discontinuities in a way that closely reflects their complexity and frequency distributions.


Author(s):  
Damir Ćavar, Lwin Moe, Hai Hu, Kenneth Steimel

The Free Linguistic Environment (FLE) project focuses on the development of an open and free library of natural language processing functions and a grammar engineering platform for Lexical Functional Grammar (LFG) and related grammar frameworks. In its present state, the code-base of FLE contains the basic essential elements for LFG parsing. It uses finite-state-based morphological analyzers and syntactic unification parsers to generate parse trees and related functional representations for input sentences based on a grammar. It can process a variety of grammar formalisms, which can be used independently or serve as backbones for the LFG parser. Among the supported formalisms are Context-free Grammars (CFG), Probabilistic Context-free Grammars (PCFG), and all formal grammar components of the XLE grammar formalism. The current implementation of the LFG parser includes the possibility to use a PCFG backbone to model probabilistic c-structures. It also includes f-structure representations that allow for the specification or calculation of probabilities for complete f-structure representations, as well as for sub-paths in f-structure trees. Given these design features, FLE enables various forms of probabilistic modeling of c-structures and f-structures for input or output sentences that go beyond the capabilities of other technologies based on the LFG framework.
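A PCFG backbone of the kind described can be sketched in miniature: a probabilistic CKY recognizer over a grammar in Chomsky normal form that returns the best derivation probability for the start symbol. This is a generic illustration of the idea, not FLE's implementation.

```python
from collections import defaultdict

def pcky(words, lexicon, rules, start='S'):
    """Minimal probabilistic CKY over a CNF grammar.
    lexicon: {(A, word): p} for A -> word
    rules:   {(A, B, C): p} for A -> B C
    Returns the best probability of deriving `start` over all words."""
    n = len(words)
    chart = defaultdict(float)   # (i, j, A) -> best prob of A over words[i:j]
    for i, w in enumerate(words):
        for (A, word), p in lexicon.items():
            if word == w and p > chart[i, i + 1, A]:
                chart[i, i + 1, A] = p
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):            # split point
                for (A, B, C), p in rules.items():
                    q = p * chart[i, k, B] * chart[k, j, C]
                    if q > chart[i, j, A]:
                        chart[i, j, A] = q
    return chart[0, n, start]
```

With a two-rule toy grammar, `pcky(['John', 'saw', 'Mary'], ...)` multiplies the lexical and rule probabilities along the best parse.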


Author(s):  
Siva Reddy, Oscar Täckström, Michael Collins, Tom Kwiatkowski, Dipanjan Das, ...

The strongly typed syntax of grammar formalisms such as CCG, TAG, LFG and HPSG offers a synchronous framework for deriving syntactic structures and semantic logical forms. In contrast, partly due to the lack of a strong type system, dependency structures are easy to annotate and have become a widely used form of syntactic analysis for many languages. However, the lack of a type system makes a formal mechanism for deriving logical forms from dependency structures challenging. We address this by introducing a robust system based on the lambda calculus for deriving neo-Davidsonian logical forms from dependency trees. These logical forms are then used for semantic parsing of natural language to Freebase. Experiments on the Free917 and WebQuestions datasets show that our representation is superior to the original dependency trees and that it outperforms a CCG-based representation on this task. Compared to prior work, we obtain the strongest result to date on Free917 and competitive results on WebQuestions.
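The shape of a neo-Davidsonian logical form assembled from labelled dependency arcs can be sketched as follows; the role mapping and the string rendering are illustrative assumptions, not the paper's actual lambda-calculus conversion rules.

```python
def neo_davidsonian(root_pred, deps):
    """Assemble a neo-Davidsonian conjunction over an event variable e
    from labelled dependency arcs (a toy illustration).
    deps: list of (relation, dependent-word) pairs off the root."""
    role = {'nsubj': 'arg1', 'dobj': 'arg2'}   # assumed role mapping
    conjuncts = [f"{root_pred}(e)"]
    for rel, word in deps:
        conjuncts.append(f"{role.get(rel, rel)}(e, {word})")
    return "exists e. " + " & ".join(conjuncts)
```

For "John sleeps", the root predicate `sleep` and the single `nsubj` arc yield `exists e. sleep(e) & arg1(e, John)`.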


2016 · Vol 42 (3) · pp. 353-389
Author(s):  
Xun Zhang, Yantao Du, Weiwei Sun, Xiaojun Wan

Derivations under different grammar formalisms allow extraction of various dependency structures. Particularly, bilexical deep dependency structures beyond surface tree representation can be derived from linguistic analysis grounded by CCG, LFG, and HPSG. Traditionally, these dependency structures are obtained as a by-product of grammar-guided parsers. In this article, we study the alternative data-driven, transition-based approach, which has achieved great success for tree parsing, to build general dependency graphs. We integrate existing tree parsing techniques and present two new transition systems that can generate arbitrary directed graphs in an incremental manner. Statistical parsers that are competitive in both accuracy and efficiency can be built upon these transition systems. Furthermore, the heterogeneous design of transition systems yields diversity of the corresponding parsing models and thus greatly benefits parser ensemble. Concerning the disambiguation problem, we introduce two new techniques, namely, transition combination and tree approximation, to improve parsing quality. Transition combination makes every action performed by a parser significantly change configurations. Therefore, more distinct features can be extracted for statistical disambiguation. With the same goal of extracting informative features, tree approximation induces tree backbones from dependency graphs and re-uses tree parsing techniques to produce tree-related features. We conduct experiments on CCG-grounded functor–argument analysis, LFG-grounded grammatical relation analysis, and HPSG-grounded semantic dependency analysis for English and Chinese. Experiments demonstrate that data-driven models with appropriate transition systems can produce high-quality deep dependency analysis, comparable to more complex grammar-driven models. Experiments also indicate the effectiveness of the heterogeneous design of transition systems for parser ensemble, transition combination, as well as tree approximation for statistical disambiguation.
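What lets a transition system generate arbitrary directed graphs rather than trees can be shown with a toy sketch: arc actions do not remove the dependent from the configuration, so a word can receive multiple heads (and even participate in cycles). This is an illustration of the general idea, not either of the paper's two systems.

```python
def parse(n_words, oracle_actions):
    """Toy arc-creating transition system for directed graphs.
    Unlike tree-parsing systems, LEFT/RIGHT keep both the stack top
    and the buffer front in place, so multiple heads are possible."""
    buffer = list(range(n_words))
    stack, arcs = [], set()
    for act in oracle_actions:
        if act == 'SHIFT':               # move buffer front onto the stack
            stack.append(buffer.pop(0))
        elif act == 'LEFT':              # arc: buffer front -> stack top
            arcs.add((buffer[0], stack[-1]))
        elif act == 'RIGHT':             # arc: stack top -> buffer front
            arcs.add((stack[-1], buffer[0]))
        elif act == 'POP':               # discard finished stack top
            stack.pop()
    return arcs                          # set of (head, dependent) pairs
```

For a two-word input, the action sequence SHIFT, RIGHT, LEFT produces the two-cycle {(0, 1), (1, 0)}, something no tree-constrained system can output.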


2016
Author(s):  
Andrew Lamont, Jonathan Washington

2015 · Vol 41 (3) · pp. 503-538
Author(s):  
Yue Zhang, Stephen Clark

Word ordering is a fundamental problem in text generation. In this article, we study word ordering using a syntax-based approach and a discriminative model. Two grammar formalisms are considered: Combinatory Categorial Grammar (CCG) and dependency grammar. Because the search is over both a likely string and its syntactic analysis, the search space is massive, making discriminative training challenging. We develop a learning-guided search framework, based on best-first search, and investigate several alternative training algorithms. The framework we present is flexible in that it allows constraints to be imposed on output word orders. To demonstrate this flexibility, a variety of input conditions are considered. First, we investigate a “pure” word-ordering task in which the input is a multi-set of words and the task is to order them into a grammatical and fluent sentence. This task has been tackled previously, and we report improved performance over existing systems on a standard Wall Street Journal test set. Second, we tackle the same ordering problem under a range of input conditions, from the bare case with no dependencies or POS tags specified to the extreme case where all POS tags and unordered, unlabeled dependencies are provided as input (and various conditions in between). When applied to the NLG 2011 shared task, our system gives competitive results compared with the best-performing systems, providing a further demonstration of the practical utility of our system.
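Best-first search over partial word orders can be sketched as follows; `score` stands in for the trained discriminative model and is an assumed interface (higher is better), and distinct input words are assumed for simplicity.

```python
import heapq

def order_words(words, score):
    """Best-first search over partial word orders: repeatedly expand
    the highest-scoring partial sequence until one uses every word.
    A sketch of learning-guided search, not the paper's trained system."""
    start = ()
    heap = [(-score(start), start, frozenset(words))]
    while heap:
        neg_s, seq, remaining = heapq.heappop(heap)
        if not remaining:                 # complete order found
            return list(seq)
        for w in remaining:               # extend by one word
            nxt = seq + (w,)
            heapq.heappush(heap, (-score(nxt), nxt, remaining - {w}))
```

With a toy scorer that rewards prefixes of a target order, the search recovers that order without enumerating all permutations.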

