On syntactic categories

A traditional concern of grammarians has been the question of whether the members of given pairs of expressions belong to the same or different syntactic categories. Consider the following example sentences. (a) I think Fido destroyed the kennel. (b) The kennel, I think Fido destroyed. Are the two underlined expressions members of the same syntactic category or not? The generative grammarians of the last quarter century have, almost without exception, taken the answer to be affirmative. In the present paper I explore the implications of taking the answer to be negative. The changes consequent upon this negative answer turn out to be very far-reaching: (i) it becomes as simple to state rules for constructions of the general type exemplified in (b) as it is for the canonical NP VP construction in (a); (ii) we immediately derive an explanation for a range of coordination facts that have remained quite mysterious since they were discovered by J. R. Ross some 15 years ago; (iii) our grammars can entirely dispense with the class of rules known as transformations; (iv) our grammars can be shown to be formally equivalent to what are known as the context-free phrase structure grammars; (v) this latter consequence has the effect of making potentially relevant to natural language grammars a whole literature of mathematical results on the parsability and learnability of context-free phrase structure grammars.
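
The move the abstract describes can be sketched with a toy context-free grammar in which the fronted construction gets its own category, S/NP (a sentence missing an NP). The category names, rules, and recognizer below are my own illustration of the idea, not the paper's actual grammar:

```python
# Toy grammar: "the kennel, I think Fido destroyed" is generated by
# ordinary context-free rules once S/NP ("S missing an NP") and
# VP/NP ("VP missing an NP") are admitted as categories of their own.
RULES = {
    "S":     [("NP", "VP"), ("NP", "S/NP")],   # canonical and topicalized
    "VP":    [("V", "NP"), ("V", "S")],
    "S/NP":  [("NP", "VP/NP")],
    "VP/NP": [("V",), ("V", "S/NP")],          # the NP gap sits somewhere inside
}
LEXICON = {
    "NP": {("I",), ("Fido",), ("the", "kennel")},
    "V":  {("think",), ("destroyed",)},
}

def derives(cat, words):
    """True if category `cat` derives the token sequence `words`."""
    words = tuple(words)
    if cat in LEXICON:
        return words in LEXICON[cat]
    return any(_split(rhs, words) for rhs in RULES.get(cat, []))

def _split(rhs, words):
    # Try every way of dividing `words` among the categories in `rhs`.
    if not rhs:
        return not words
    return any(derives(rhs[0], words[:i]) and _split(rhs[1:], words[i:])
               for i in range(1, len(words) + 1))

print(derives("S", "I think Fido destroyed the kennel".split()))   # True
print(derives("S", "the kennel I think Fido destroyed".split()))   # True
```

Both the canonical (a) and the topicalized (b) strings are recognized without any transformation: the "movement" is recorded purely in the category labels.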

2021 · Vol 3 (2) · pp. 215-244
Author(s):  
Diego Gabriel Krivochen

Abstract Proof-theoretic models of grammar are based on the view that an explicit characterization of a language comes in the form of the recursive enumeration of strings in that language. That recursive enumeration is carried out by a procedure which strongly generates a set of structural descriptions Σ and weakly generates a set of strings S; a grammar is thus a function that pairs elements of Σ with elements of S. Structural descriptions are obtained by means of context-free phrase structure rules or via recursive combinatorics, and structure is assumed to be uniform: binary branching trees all the way down. In this work we will analyse natural language constructions for which such a rigid conception of phrase structure is descriptively inadequate and propose a solution for the problem of phrase structure grammars assigning too much or too little structure to natural language strings: we propose that the grammar can oscillate between levels of computational complexity in local domains, which correspond to elementary trees in a lexicalised Tree Adjoining Grammar.
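
The strong/weak generation distinction the abstract relies on can be made concrete with a small context-free grammar (mine, for illustration only): the same procedure enumerates the set Σ of derivation trees and, by taking yields, the set S of strings.

```python
import itertools

# A toy CF grammar; nonterminals are dict keys, everything else is terminal.
RULES = {
    "S":  [("NP", "VP")],
    "NP": [("Fido",), ("the", "kennel")],
    "VP": [("barked",), ("destroyed", "NP")],
}

def trees(cat, depth):
    """Enumerate derivation trees rooted in `cat`, up to `depth` expansions."""
    if cat not in RULES:            # terminal symbol
        yield cat
        return
    if depth == 0:
        return
    for rhs in RULES[cat]:
        for kids in itertools.product(*(list(trees(s, depth - 1)) for s in rhs)):
            yield (cat,) + kids

def yield_of(tree):
    """The string of terminals at the leaves of a derivation tree."""
    if isinstance(tree, str):
        return (tree,)
    return tuple(w for kid in tree[1:] for w in yield_of(kid))

sigma = list(trees("S", 3))                        # strongly generated: Σ
strings = {" ".join(yield_of(t)) for t in sigma}   # weakly generated: S
print(len(sigma), sorted(strings))
```

Note that every tree here is binary or flat by stipulation; the abstract's point is precisely that such a uniform tree shape is sometimes the wrong description.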


Information · 2021 · Vol 12 (12) · pp. 502
Author(s):  
Stefan Wagenpfeil ◽  
Paul Mc Kevitt ◽  
Matthias Hemmje

Multimedia feature graphs are employed to represent features of images, video, audio, or text. Various techniques exist to extract such features from multimedia objects. In this paper, we describe the extension of such feature graphs to represent the meaning of multimedia features and introduce a formal context-free PS-grammar (Phrase Structure grammar) to automatically generate human-understandable natural language expressions based on such features. To achieve this, we define a semantic extension to syntactic multimedia feature graphs and introduce a set of production rules for phrases of natural language English expressions. This explainability, which is founded on a semantic model, provides the opportunity to represent any multimedia feature in a human-readable and human-understandable form, which largely closes the gap between the technical representation of such features and their semantics. We show how this explainability can be formally defined and demonstrate the corresponding implementation based on our generic multimedia analysis framework. Furthermore, we show how this semantic extension can be employed to increase effectiveness in precision and recall experiments.
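
To give a flavour of what "production rules verbalizing features" can look like, here is a hypothetical sketch: a few hard-coded phrase-structure productions that turn a detected-feature record into an English sentence. The rule set, feature keys, and function name are my own assumptions, not the paper's actual grammar.

```python
# Toy productions: S -> NP VP ; NP -> Det (Adj) N ; VP -> V PP.
# The feature dict stands in for a node of a multimedia feature graph.
def describe(feature):
    """Verbalize one detected feature as an English sentence."""
    det = "An" if feature["label"][0] in "aeiou" else "A"
    np = " ".join(x for x in (det, feature.get("colour"), feature["label"]) if x)
    vp = f"appears in the {feature['region']} of the image"
    return f"{np} {vp}."

print(describe({"label": "car", "colour": "red", "region": "foreground"}))
# A red car appears in the foreground of the image.
```

A real system would of course derive the productions from the grammar rather than hard-code them, but the pipeline shape (feature node in, sentence out) is the same.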


Author(s):  
John Carroll

This chapter introduces key concepts and techniques for natural-language parsing: that is, finding the grammatical structure of sentences. The chapter introduces the fundamental algorithms for parsing with context-free (CF) phrase structure grammars, how these deal with ambiguous grammars, and how CF grammars and associated disambiguation models can be derived from syntactically annotated text. It goes on to consider dependency analysis, and outlines the main approaches to dependency parsing based both on manually written grammars and on learning from text annotated with dependency structures. It finishes with an overview of techniques used for parsing with grammars that use feature structures to encode linguistic information.
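
The best-known of the fundamental CF parsing algorithms the chapter covers is CYK recognition for a grammar in Chomsky normal form; the textbook version is sketched below (the toy grammar is my own, not the chapter's).

```python
from itertools import product

# CNF grammar: binary rules A -> B C and lexical rules A -> word.
BINARY = {("NP", "VP"): {"S"}, ("V", "NP"): {"VP"}, ("Det", "N"): {"NP"}}
LEXICAL = {"Fido": {"NP"}, "destroyed": {"V"}, "the": {"Det"}, "kennel": {"N"}}

def cyk(words):
    """CYK recognition: chart[i][j] holds the categories spanning words[i:j]."""
    n = len(words)
    chart = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, w in enumerate(words):
        chart[i][i + 1] = set(LEXICAL.get(w, ()))
    for span in range(2, n + 1):                 # widen spans bottom-up
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):            # every split point
                for b, c in product(chart[i][k], chart[k][j]):
                    chart[i][j] |= BINARY.get((b, c), set())
    return "S" in chart[0][n]

print(cyk("Fido destroyed the kennel".split()))  # True
```

The same chart, with back-pointers and per-rule scores added, is also the basis of the probabilistic disambiguation models the chapter goes on to describe.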


Author(s):  
Paolo Santorio

On a traditional view, the semantics of natural language makes essential use of a context parameter, i.e. a set of coordinates that represents the situation of speech. In classical frameworks, this parameter plays two roles: it contributes to determining the content of utterances and it is used to define logical consequence. This paper argues that recent empirical proposals about context shift in natural language, which are supported by an increasing body of cross-linguistic data, are incompatible with this traditional view. The moral is that context has no place in semantic theory proper. We should return to the so-called multiple-indexing frameworks developed by Montague and others, and relegate context to the postsemantic stage of a theory of meaning.
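
The double role of the context parameter can be illustrated with a toy two-dimensional evaluator in the classical mould: expressions are evaluated relative to a context c (fixing "I", "here") and an index i (the circumstance of evaluation). The mini-language and all names below are my own illustration, not the paper's formalism.

```python
# Toy two-dimensional semantics: context supplies the referents of
# indexicals; the index supplies the circumstance at which truth is checked.
def evaluate(sentence, context, index):
    if sentence == "I am here":
        # "I" and "here" draw on the context; location is checked at the index.
        return index["location"][context["speaker"]] == context["location"]
    raise ValueError("unknown sentence")

c = {"speaker": "Ana", "location": "Paris"}
actual = {"location": {"Ana": "Paris"}}          # index matching the context
counterfactual = {"location": {"Ana": "Rome"}}   # a shifted index

print(evaluate("I am here", c, actual))          # True
print(evaluate("I am here", c, counterfactual))  # False
```

This reproduces the classical observation that "I am here" is true whenever the index matches its context yet is not necessary; the abstract's claim is that context-shifting data break exactly this division of labour.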


Author(s):  
John Carroll

This article introduces the concepts and techniques for natural language (NL) parsing: that is, using a grammar to assign a syntactic analysis to a string of words or to a lattice of word hypotheses output by a speech recognizer. The level of detail required depends on the language processing task being performed and the particular approach to the task that is being pursued. This article further describes approaches that produce ‘shallow’ analyses. It also outlines approaches to parsing that analyse the input in terms of labelled dependencies between words. Producing hierarchical phrase structure requires grammars that have at least context-free (CF) power. CF algorithms that are widely used in NL parsing are described in this article. To support detailed semantic interpretation, more powerful grammar formalisms are required, but these are usually parsed using extensions of CF parsing algorithms. Furthermore, this article describes unification-based parsing. Finally, it discusses three important issues that have to be tackled in real-world applications of parsing: evaluation of parser accuracy, parser efficiency, and measurement of grammar/parser coverage.
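
The core operation behind unification-based parsing is feature-structure unification. A simplified dict-based sketch is given below; real unification-based parsers use graphs with reentrancy (shared substructures), which plain dicts cannot express.

```python
# Toy feature-structure unification: merge two structures, failing (None)
# on any clash between atomic values.
def unify(f, g):
    if f is None or g is None:
        return None
    out = dict(f)
    for k, v in g.items():
        if k not in out:
            out[k] = v
        elif isinstance(out[k], dict) and isinstance(v, dict):
            sub = unify(out[k], v)      # recurse into embedded structures
            if sub is None:
                return None             # clash inside a substructure
            out[k] = sub
        elif out[k] != v:
            return None                 # atomic clash
    return out

subj = {"cat": "NP", "agr": {"num": "sg", "per": 3}}
verb = {"cat": "VP", "agr": {"num": "sg"}}
print(unify(subj["agr"], verb["agr"]))        # agreement succeeds
print(unify({"num": "sg"}, {"num": "pl"}))    # None: agreement clash
```

A rule such as S -> NP VP then simply requires that the NP's and VP's agreement structures unify, which is how number agreement is enforced without multiplying grammar rules.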


1996 · Vol 23 (2) · pp. 369-395
Author(s):  
Julian M. Pine ◽  
Helen Martindale

Abstract There has been a growing trend in recent years towards the attribution of adult-like syntactic categories to young language-learning children. This is based, at least in part, on studies which claim to have found positive evidence for syntactic phrase structure categories in young children's speech. However, these claims contradict the findings of previous research, which suggest that the categories underlying children's early multi-word speech are much more limited in scope. The present study represents an attempt to reconcile the findings of these different lines of research by focusing specifically on Valian's (1986) criteria for attributing the syntactic category of determiner to young children. The aim is, firstly, to replicate Valian's results regarding her determiner criteria on a new sample of seven children between the ages of 1;10 and 2;6; secondly, to investigate the extent to which children show overlap in the contexts in which they use different determiner types; and, thirdly, to compare this with a controlled measure of the overlap shown by competent adult speakers. The results suggest that Valian's criteria for attributing a syntactic determiner category are too generous and could be passed by children with a relatively small amount of limited-scope knowledge. They also provide at least some evidence that a limited-scope formula account of children's early determiner use may fit the data better than an adult-like syntactic account.


2008 · Vol 19 (03) · pp. 597-615
Author(s):  
Artur Jeż

Conjunctive grammars, introduced by Okhotin, extend context-free grammars with an additional operation of intersection in the body of any production of the grammar. Several theorems and algorithms for context-free grammars generalize to the conjunctive case. Okhotin posed nine open problems concerning these grammars. One of them was the question of whether conjunctive grammars over a unary alphabet generate only regular languages. We give a negative answer, contrary to the conjectured positive one, by constructing a conjunctive grammar for the language {a^(4^n) : n ∈ ℕ}. We also generalize this result: for every set of natural numbers L, we show that {a^n : n ∈ L} is a conjunctive unary language whenever the set of base-k representations of the elements of L is regular, for arbitrary k.
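
The shape of the generalized result can be checked extensionally. The harness below is my own membership test for {a^n : n ∈ L} given a regular pattern over base-k digit strings; it verifies which lengths qualify, it does not construct the conjunctive grammar itself.

```python
import re

def in_unary_language(word, base, pattern):
    """Is `word` = a^n where the base-`base` representation of n matches `pattern`?"""
    if set(word) - {"a"}:
        return False
    n = len(word)
    digits = ""
    while n:                      # most significant digit first
        digits = str(n % base) + digits
        n //= base
    return re.fullmatch(pattern, digits or "0") is not None

# {a^(4^n) : n in N}: exactly the lengths whose base-4 representation is 1 0*.
lengths = [m for m in range(1, 300) if in_unary_language("a" * m, 4, "10*")]
print(lengths)  # [1, 4, 16, 64, 256]
```

So the paper's concrete witness language {a^(4^n) : n ∈ ℕ} is the special case L = powers of 4, whose base-4 representations form the regular set 10*.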


2020 · Vol 44 (1) · pp. 95-131
Author(s):  
Diego Gabriel Krivochen ◽  
Ľudmila Lacková

Abstract Linguistic iconicity has been studied since ancient times (e.g., Plato’s Cratylus, see Cooper & Hutchinson 1997). Within modern grammatical description, this notion was mostly developed by Jakobson and Benveniste; nowadays, iconicity in language is even being experimentally tested (e.g., Blasi et al. 2016; Diatka & Milička 2017). However, most studies on linguistic iconicity pertain to prosody, sound symbolism, or morphology; syntactic iconicity has been vastly underexplored. In this paper, we present two hypotheses concerning syntactic iconicity: (1) syntactic descriptions of natural language strings have an inherent structure which is isomorphic to that of representations in some other component of grammar or a non-grammatical system; or (2) linear order imposed on phrase structure is isomorphic to that in some other component of grammar or a non-grammatical system. We will argue in favour of the former, which constitutes a novel perspective on iconicity in grammar. We furthermore discuss the place that iconicity may have in the architecture of a generative system.

