Transitivity alternations in Yucatec, and the correlation between aspect and argument roles

Linguistics ◽  
1999 ◽  
Vol 37 (3) ◽  
Author(s):  
Martin Krämer ◽  
Dieter Wunderlich

1994 ◽ 
Vol 17 (2) ◽  
pp. 141-154 ◽  
Author(s):  
Konstantin I. Kazenin

This article gives a cognitively based account of polysemous transitivity alternations (TAs), which are Agent-preserving with some verbs and Object-preserving with others. Data from three languages – Asiatic Eskimo, Boumaa Fijian and Bambara – are presented. It is argued that the mechanism governing the distribution of the meanings of these TAs is semantic in nature and does not depend on the coding technique a language uses.


Author(s):  
Trung Minh Nguyen ◽  
Thien Huu Nguyen

Previous work on event extraction has mainly focused on predicting event triggers and argument roles, treating entity mentions as provided by human annotators. This is unrealistic, as entity mentions are usually predicted by existing toolkits whose errors may propagate to event trigger and argument role recognition. Some recent work has addressed this problem by jointly predicting entity mentions, event triggers and arguments; however, such work is limited to discrete, hand-engineered features to represent contextual information for the individual tasks and their interactions. In this work, we propose a novel model that jointly predicts entity mentions, event triggers and arguments based on shared hidden representations from deep learning. Experiments demonstrate the benefits of the proposed method, which achieves state-of-the-art performance for event extraction.
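The shared-representation idea can be sketched in miniature. The code below is illustrative only (the features, thresholds and hand-set weights are my own assumptions, not the authors' neural model): one shared per-token representation feeds separate entity and trigger heads, and the argument head reuses both heads' outputs, so the three tasks interact rather than running as a pipeline.

```python
from typing import Dict, List, Tuple

def shared_encode(tokens: List[str]) -> List[Dict[str, float]]:
    # Toy stand-in for a learned hidden representation: two features per token.
    return [{"is_cap": float(t[:1].isupper()), "len": float(len(t))} for t in tokens]

# Hand-set weights for illustration only (a real model would learn these).
ENTITY_W  = {"is_cap": 2.0,  "len": 0.1}
TRIGGER_W = {"is_cap": -1.0, "len": 0.2}

def _score(h: Dict[str, float], w: Dict[str, float]) -> float:
    return sum(h[k] * w[k] for k in h)

def joint_predict(tokens: List[str]) -> Tuple[List[int], List[int], List[Tuple[int, int]]]:
    hidden = shared_encode(tokens)  # one shared representation for all tasks
    entities = [i for i, h in enumerate(hidden) if _score(h, ENTITY_W) > 1.0]
    triggers = [i for i, h in enumerate(hidden)
                if i not in entities and _score(h, TRIGGER_W) > 0.5]
    # The argument "head" reuses the other tasks' outputs: pair triggers with entities.
    arguments = [(t, e) for t in triggers for e in entities]
    return entities, triggers, arguments
```

For example, `joint_predict(["John", "attacked", "Baghdad"])` returns `([0, 2], [1], [(1, 0), (1, 2)])`: two entity mentions, one trigger, and two trigger-entity argument pairs produced in a single joint pass.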


Author(s):  
Artemis Alexiadou ◽  
Elena Anagnostopoulou ◽  
Florian Schäfer

Author(s):  
Junchi Zhang ◽  
Yanxia Qin ◽  
Yue Zhang ◽  
Mengchi Liu ◽  
Donghong Ji

The task of event extraction comprises several subtasks: detecting entity mentions, event triggers and argument roles. Traditional methods solve them as a pipeline, which does not exploit task correlations for mutual benefit. There have been recent efforts toward building a joint model for all tasks; however, due to technical challenges, no prior work predicts the joint output structure as a single task. We build the first model to this end using a neural transition-based framework, incrementally predicting complex joint structures in a state-transition process. Results on standard benchmarks show the benefits of the joint model, which achieves the best reported result in the literature.
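The state-transition idea can be sketched as a toy transition system (my own illustration; the action set and state fields are assumptions, not the paper's actual system): a state consumes the sentence left to right, and a single action sequence incrementally builds entities, triggers and trigger-argument arcs as one joint output structure.

```python
from collections import namedtuple

# Toy transition state: remaining token indices plus the joint structure built so far.
State = namedtuple("State", "buffer entities triggers arguments")

def step(state, action, arg=None):
    if action == "SHIFT":    # discard the front token (plays no role in any event)
        return state._replace(buffer=state.buffer[1:])
    if action == "ENTITY":   # label the front token as an entity mention
        return state._replace(buffer=state.buffer[1:],
                              entities=state.entities + (state.buffer[0],))
    if action == "TRIGGER":  # label the front token as an event trigger
        return state._replace(buffer=state.buffer[1:],
                              triggers=state.triggers + (state.buffer[0],))
    if action == "ARC":      # attach entity index `arg` to the most recent trigger
        return state._replace(arguments=state.arguments + ((state.triggers[-1], arg),))
    raise ValueError(action)

def run(n_tokens, actions):
    state = State(tuple(range(n_tokens)), (), (), ())
    for action, arg in actions:
        state = step(state, action, arg)
    return state
```

For a three-token sentence such as "John attacked Baghdad", the action sequence `[("ENTITY", None), ("TRIGGER", None), ("ARC", 0), ("ENTITY", None), ("ARC", 2)]` yields entities `(0, 2)`, trigger `(1,)` and arcs `((1, 0), (1, 2))` in one incremental pass, rather than as separate pipeline stages.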

