Efficient Evaluation and Learning in Multilevel Parallel Constraint Grammars

2017 ◽  
Vol 48 (3) ◽  
pp. 349-388
Author(s):  
Paul Boersma ◽  
Jan-Willem van Leussen

In multilevel parallel Optimality Theory grammars, the number of candidates (possible paths from the input to the output level) increases exponentially with the number of levels of representation. The problem is that, with the customary strategy of listing all candidates in a tableau, the computation time for evaluation (i.e., choosing the winning candidate) and learning (i.e., reranking the constraints on the basis of language data) increases exponentially with the number of levels as well. This article proposes instead to collect the candidates in a graph in which the number of nodes and the number of connections increase only linearly with the number of levels of representation. As a result, there exist procedures for evaluation and learning whose cost increases only linearly with the number of levels. These efficient procedures help to make multilevel parallel constraint grammars more feasible as models of human language processing. We illustrate visualization, evaluation, and learning with a toy grammar for a traditional case that has previously been analyzed in terms of parallel evaluation, namely French liaison.
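The efficiency claim in this abstract can be sketched with a toy dynamic-programming pass over a multilevel candidate graph. This is not the authors' implementation: the graph, node names, and edge costs below are made up, and a single numeric cost per edge stands in for weighted constraint violations. The point it illustrates is structural: the number of input-to-output paths (candidates) grows exponentially with the number of levels, yet a Viterbi-style pass finds the winner in time linear in the number of levels.

```python
# Hypothetical graph: each intermediate level has 2 nodes; edge weights
# stand in for summed constraint violations between adjacent levels.
levels = [["in"], ["a1", "a2"], ["b1", "b2"], ["out"]]
cost = {  # (from_node, to_node) -> violation cost (made-up numbers)
    ("in", "a1"): 1, ("in", "a2"): 2,
    ("a1", "b1"): 0, ("a1", "b2"): 3,
    ("a2", "b1"): 2, ("a2", "b2"): 0,
    ("b1", "out"): 1, ("b2", "out"): 1,
}

def best_candidate(levels, cost):
    """Viterbi-style pass: O(levels * width^2) work, instead of
    enumerating every input-to-output path in a tableau."""
    best = {levels[0][0]: (0, [levels[0][0]])}  # node -> (cost, path)
    for prev, cur in zip(levels, levels[1:]):
        nxt = {}
        for node in cur:
            cands = [(best[p][0] + cost[(p, node)], best[p][1] + [node])
                     for p in prev if (p, node) in cost]
            nxt[node] = min(cands)  # keep only the best path into each node
        best = nxt
    return best[levels[-1][0]]

print(best_candidate(levels, cost))  # lowest-cost path from input to output
```

Because each level keeps only the best path into each node, adding another level of representation adds a constant amount of work rather than doubling the candidate set.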

Author(s):  
Diana McCarthy

Natural language processing is the study of computer programs that can understand and produce human language. An important goal in the research to produce such technology is identifying the right meaning of words and phrases. In this paper, we give an overview of current research in three areas: (i) inducing word meaning; (ii) distinguishing different meanings of words used in context; and (iii) determining when the meaning of a phrase cannot straightforwardly be obtained from its parts. Manual construction of resources is labour-intensive and costly, and furthermore may not reflect the meanings that are useful for the task or data at hand. For this reason, we focus particularly on systems that learn about meanings from samples of language data, rather than from examples annotated by humans.


Author(s):  
Shreyashi Chowdhury ◽  
Asoke Nath

Natural language processing (NLP) is a subfield of linguistics, computer science, and artificial intelligence concerned with the interactions between computers and human language, in particular how to program computers to process and analyse large amounts of natural language data. The goal is a computer capable of "understanding" the contents of documents, including the contextual nuances of the language within them. NLP combines computational linguistics (rule-based modelling of human language) with statistical, machine learning, and deep learning models. Together, these technologies enable computers to process human language in the form of text or voice data and to "understand" its full meaning, complete with the speaker or writer's intent and sentiment. Challenges in natural language processing frequently involve speech recognition, natural language understanding, and natural language generation. This paper discusses the scope and challenges, current trends, and future directions of natural language processing.


1985 ◽  
Vol 30 (7) ◽  
pp. 529-531
Author(s):  
Patrick Carroll

2021 ◽  
Vol 21 (2) ◽  
pp. 1-25
Author(s):  
Pin Ni ◽  
Yuming Li ◽  
Gangmin Li ◽  
Victor Chang

Cyber-Physical Systems (CPS), as multi-dimensional complex systems that connect the physical world and the cyber world, have a strong demand for processing large amounts of heterogeneous data. These tasks include Natural Language Inference (NLI) over text from different sources. However, current research on natural language processing in CPS has not explored this area. This study therefore proposes a Siamese Network structure that combines stacked residual bidirectional Long Short-Term Memory with an attention mechanism and a Capsule Network for the NLI module in CPS, used to infer the relationship between text/language data from different sources. The model implements NLI as the basic semantic understanding module in CPS and is evaluated in detail on three main NLI benchmarks. Comparative experiments show that the proposed method achieves competitive performance, has a certain generalization ability, and balances performance against the number of trained parameters.
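The Siamese structure described here can be sketched in a few lines. This is a heavily simplified, NumPy-only illustration, not the paper's model: a toy shared bag-of-embeddings encoder stands in for the stacked residual BiLSTM with attention and capsule layer. What it shows is the Siamese idea itself: one set of weights encodes both sentences of the pair, and comparison features built from the two sentence vectors feed a classifier head. The feature scheme [u, v, |u − v|, u ⊙ v] is a common NLI matching heuristic, assumed here rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM = 50, 8
emb = rng.normal(size=(VOCAB, DIM))        # shared embedding table
W = rng.normal(size=(DIM, DIM)) / DIM      # shared projection (the "twin" weights)

def encode(token_ids):
    """Shared encoder applied identically to both members of the pair."""
    h = np.tanh(emb[token_ids] @ W)        # project each token
    return h.mean(axis=0)                  # mean-pool to a sentence vector

def pair_features(premise_ids, hypothesis_ids):
    u, v = encode(premise_ids), encode(hypothesis_ids)
    # matching features; would feed a softmax over {entail, neutral, contradict}
    return np.concatenate([u, v, np.abs(u - v), u * v])

feats = pair_features([1, 4, 7], [1, 4, 9])
print(feats.shape)  # 4 * DIM features for the classifier head
```

Because the encoder weights are shared, identical inputs map to identical vectors, so the |u − v| block is exactly zero for a sentence paired with itself; that weight sharing is what makes the network "Siamese".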


2013 ◽  
Vol 30 ◽  
pp. 188-200 ◽  
Author(s):  
Jeroen van de Weijer ◽  
Marjoleine Sloos

This paper questions the assumption made in classic Optimality Theory (Prince & Smolensky 1993 [2004]) that markedness constraints are an innate part of Universal Grammar. Instead, we argue that constraints are acquired on the basis of the language data to which L1-learning children are exposed. This is argued both on general grounds (innateness is an assumption that should not be invoked lightly) and on the basis of empirical evidence. We investigate this issue for six general markedness constraints in French. First, we show that all of these constraints could be acquired on the basis of the ambient data. Second, we show that the order of acquisition of the marked structures matches the frequency of violations of the relevant constraints in the input quite well. This argues in favour of a phonological model in which constraints are acquired, not innate, i.e., a model in which grammatical notions such as constraints are derived from language use.
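The frequency-matching argument in this abstract lends itself to a small sketch. The corpus and constraints below are invented toys, not the paper's French data: each markedness constraint is modelled as a violation-counting predicate, violations are tallied over the ambient input, and the resulting frequency ranking is read as a prediction about acquisition order.

```python
# Made-up input forms standing in for ambient child-directed data.
corpus = ["stra", "ta", "tra", "ta", "sta", "ta", "ta"]

constraints = {
    # hypothetical markedness constraints as violation-counting predicates
    "*ComplexOnset": lambda w: 1 if w[:2] in ("tr", "st") else 0,
    "*s+stop": lambda w: 1 if w.startswith("st") else 0,
}

def violation_counts(corpus, constraints):
    """Tally how often each constraint is violated in the input."""
    return {name: sum(check(w) for w in corpus)
            for name, check in constraints.items()}

counts = violation_counts(corpus, constraints)
# more violations in the input -> marked structure predicted to be acquired earlier
predicted_order = sorted(counts, key=counts.get, reverse=True)
print(counts, predicted_order)
```

The design point this mirrors is that nothing here is innate: both the constraint inventory and the ranking prediction are computed directly from usage data.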


Webology ◽  
2021 ◽  
Vol 18 (Special Issue 01) ◽  
pp. 196-210
Author(s):  
Dr. P. Golda Jeyasheeli ◽  
N. Indumathi

Nowadays, interaction between deaf and mute people and hearing people is difficult, because hearing people struggle to understand the meaning of gestures, while deaf and mute people have difficulty with sentence formation and grammatical correctness. To alleviate these issues, an automatic sign language sentence generation approach is proposed. In this project, Natural Language Processing (NLP) based methods are used. NLP is a powerful tool for translation into human language and is responsible for forming meaningful sentences from sign language symbols that a hearing person can also understand. In this system, both conventional NLP methods and deep learning NLP methods are used for sentence generation, and the efficiency of the two methods is compared. The generated sentence is displayed in an Android application as output. This system aims to bridge the gap in interaction between deaf and mute people and hearing people.


2003 ◽  
Vol 19 (3) ◽  
pp. 225-250 ◽  
Author(s):  
Linda Lombardi

Substitutions for English interdentals tend to be consistent by first language (L1): e.g., [t] for speakers of Russian, [s] for speakers of Japanese. While the facts suggest that some type of L1 transfer must be involved, a rule affecting a sound that does not occur in the L1 is unlearnable. Optimality Theory (OT) allows a solution to this conundrum because the grammars contain independently necessary constraint rankings that also affect the interdentals. [t] substitution results from high-ranked markedness; this can be seen as an effect of universals, because such a grammar retains the original ranking with which the L1 learner begins. [s] substitution results from high-ranked faithfulness; in this case, some L1 phonology has forced reranking, making it an effect of L1 transfer.
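The core OT mechanism this abstract relies on can be shown with a toy tableau evaluation. The constraint names and violation profiles below are a simplification for illustration, not Lombardi's exact analysis: a markedness constraint *θ bans the interdental, and two faithfulness constraints penalize changing stridency or continuancy. The same candidate set then yields [t] or [s] depending only on the ranking, mirroring how independently needed L1 rankings decide the substitution.

```python
# candidate -> violations of each constraint, for input /θ/ (toy profiles)
violations = {
    "θ": {"*θ": 1, "Ident(strident)": 0, "Ident(continuant)": 0},
    "t": {"*θ": 0, "Ident(strident)": 0, "Ident(continuant)": 1},
    "s": {"*θ": 0, "Ident(strident)": 1, "Ident(continuant)": 0},
}

def evaluate(ranking, violations):
    """OT evaluation: the winner is the candidate whose violation
    profile is lexicographically least under the constraint ranking."""
    return min(violations, key=lambda c: [violations[c][k] for k in ranking])

# "Russian-type" grammar: continuancy faithfulness ranked low -> [t] wins
print(evaluate(["*θ", "Ident(strident)", "Ident(continuant)"], violations))
# "Japanese-type" grammar: continuancy faithfulness ranked high -> [s] wins
print(evaluate(["*θ", "Ident(continuant)", "Ident(strident)"], violations))
```

The lexicographic comparison is what makes OT constraints strictly ranked rather than weighted: a single violation of a higher-ranked constraint can never be compensated by any number of satisfactions further down.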

