multiple relation
Recently Published Documents

TOTAL DOCUMENTS: 42 (five years: 6)

H-INDEX: 5 (five years: 0)

Author(s):  
Zihe Liu ◽  
Weiying Hou ◽  
Jiayi Zhang ◽  
Chenyu Cao ◽  
Bin Wu

2021 ◽  
Vol 15 (2) ◽  
pp. 293-304
Author(s):  
Muhammad Masqotul Imam Romadlani

This research investigates how humorous utterances are constructed by manipulating semantic meaning, particularly lexical semantics. Lexical semantics provides multiple meanings that describe the meaning relations within and among words, and these relations can be exploited to elicit humor. This research examines utterances that manipulate lexical semantics as a strategy for humor creation in the situation comedy Mind Your Language. Applying a descriptive qualitative approach, the findings indicate that five types of lexical semantic relations are used as strategies for creating humorous utterances: polysemy, homonymy, homophony, hyponymy, and synonymy. Because of these multiple meaning relations, the speaker can exploit an alternative meaning and thereby construct a meaning different from the one the hearer has in mind. The speaker creates an incongruity between the hearer's perception and the speaker's intended meaning. This deviation in lexical semantics between hearer and speaker clearly illustrates the incongruity theory of humor.


2021 ◽  
Vol 15 ◽  
Author(s):  
Guiduo Duan ◽  
Jiayu Miao ◽  
Tianxi Huang ◽  
Wenlong Luo ◽  
Dekun Hu

Relation extraction is a popular subtask in natural language processing (NLP). In joint entity and relation extraction, overlapping entities and the extraction of multiple relation types within overlapping triplets remain challenging problems. Classifying relations within a single shared probability space ignores the correlation information among multiple relations. This paper proposes a relation-adaptive joint entity and relation extraction model based on multi-head self-attention and a densely connected graph convolutional network, called MA-DCGCN. In the model, the multi-head attention mechanism assigns weights to the multiple relation types among entities, so that the probability spaces of the relation types are not mutually exclusive, and it flexibly predicts the strength of association between relation types and entity pairs. The densely connected graph convolutional network extracts deeper structural information from the text graph and captures entity-relation interaction information. To demonstrate the performance of our model, we conducted extensive experiments on two widely used public datasets, NYT and WebNLG. The results show that our model achieves state-of-the-art performance; in particular, the detection of overlapping triplets is significantly improved compared with several existing mainstream methods.
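The abstract's point about not forcing all relation types into one mutually exclusive probability space can be illustrated with a minimal sketch: a softmax over relation logits makes the types compete, while an independent sigmoid per type lets an entity pair carry several relations at once. This is only an illustration of the scoring idea, not the MA-DCGCN architecture; the logits below are hypothetical.

```python
import math

def softmax(scores):
    """Mutually exclusive: probabilities compete and sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid_each(scores):
    """Independent: each relation type gets its own probability in (0, 1)."""
    return [1.0 / (1.0 + math.exp(-s)) for s in scores]

# Hypothetical logits for one entity pair over three relation types.
logits = [2.0, 1.5, -3.0]

exclusive = softmax(logits)         # picking one type suppresses the others
independent = sigmoid_each(logits)  # the first two types can both "fire"
```

With independent scores, thresholding at 0.5 keeps both of the first two relation types for the pair, which is exactly what a shared softmax space cannot express.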


2021 ◽  
Vol 420 ◽  
pp. 162-170
Author(s):  
Heyan Huang ◽  
Ming Lei ◽  
Chong Feng

2020 ◽  
Vol 13 (3) ◽  
pp. 205979912096169
Author(s):  
Matthew F Dabkowski ◽  
Neng Fan ◽  
Ronald Breiger

From the outset, computational sociologists have stressed leveraging multiple relations when blockmodeling social networks. Despite this emphasis, the majority of published research over the past 40 years has focused on solving blockmodels for a single relation. When multiple relations exist, a reductionist approach is often employed, in which the relations are stacked or aggregated into a single matrix, allowing the researcher to apply single-relation, often heuristic, blockmodeling techniques. Accordingly, in this article, we develop an exact procedure for the exploratory blockmodeling of multiple-relation, mixed-mode networks. In particular, given (a) n actors, (b) m events, (c) an n × n binary one-mode network depicting the ties between actors, and (d) an n × m binary two-mode network representing the ties between actors and events, we use integer programming to find globally optimal image matrices and partitions, where k and l represent the number of actor and event positions, respectively. Given the problem's computational complexity, we also develop an algorithm to generate a minimal set of non-isomorphic image matrices, as well as a complementary, easily accessible heuristic using the network analysis software Pajek. We illustrate these concepts with a simple, hypothetical example, and we apply our techniques to a terrorist network.
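The notion of a globally optimal partition can be sketched on a toy scale: exhaustively score every assignment of actors to positions by how far each induced block is from being purely null or purely complete, and keep the minimum. This brute-force search is a stand-in for the paper's integer-programming formulation (and handles only a single one-mode relation); all names are illustrative.

```python
from itertools import product

def blockmodel_cost(A, assign, k):
    """Cost = minimum number of tie changes needed to make every block
    pure (all ones or all zeros), ignoring the diagonal."""
    n = len(A)
    cost = 0
    for r in range(k):
        for s in range(k):
            ones = zeros = 0
            for i in range(n):
                for j in range(n):
                    if i != j and assign[i] == r and assign[j] == s:
                        ones += A[i][j]
                        zeros += 1 - A[i][j]
            cost += min(ones, zeros)  # cheaper to call the block null or complete
    return cost

def exact_blockmodel(A, k):
    """Globally optimal k-position partition by exhaustive enumeration."""
    n = len(A)
    best = None
    for assign in product(range(k), repeat=n):
        c = blockmodel_cost(A, assign, k)
        if best is None or c < best[0]:
            best = (c, assign)
    return best

# Two perfect positions: actors {0, 1} send ties to {2, 3}, nothing else.
A = [[0, 0, 1, 1],
     [0, 0, 1, 1],
     [0, 0, 0, 0],
     [0, 0, 0, 0]]
cost, assign = exact_blockmodel(A, 2)  # cost 0: a perfect two-position fit
```

The enumeration visits k^n assignments, which is exactly why the article needs integer programming (for exactness at scale) and heuristics (for accessibility) instead of brute force.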


2020 ◽  
Vol 34 (05) ◽  
pp. 8528-8535
Author(s):  
Tapas Nayak ◽  
Hwee Tou Ng

A relation tuple consists of two entities and the relation between them, and such tuples are often found in unstructured text. Multiple relation tuples may be present in a text, and they may share one or both entities. Extracting such relation tuples from a sentence is a difficult task, and the sharing of entities, or overlapping entities, among tuples makes it more challenging. Most prior work adopted a pipeline approach in which entities were identified first and the relations among them found afterwards, thus missing the interaction among the relation tuples in a sentence. In this paper, we propose two approaches that use an encoder-decoder architecture to jointly extract entities and relations. In the first approach, we propose a representation scheme for relation tuples that enables the decoder to generate one word at a time, like machine translation models, while still finding all the tuples present in a sentence, with full entity names of different lengths and with overlapping entities. Next, we propose a pointer-network-based decoding approach in which an entire tuple is generated at every time step. Experiments on the publicly available New York Times corpus show that our proposed approaches outperform previous work and achieve significantly higher F1 scores.
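The first approach hinges on a representation scheme that flattens a set of tuples into one token sequence a word-by-word decoder can emit and that a post-processor can parse back, even when tuples share entities. A minimal sketch of that idea follows; the separator tokens and field order here are hypothetical, not the paper's actual scheme.

```python
# Hypothetical separator tokens; the paper's actual scheme may differ.
TUPLE_SEP = "|"   # separates whole tuples
FIELD_SEP = ";"   # separates entity1, entity2, relation within a tuple

def linearize(tuples):
    """Flatten relation tuples into one target string that a
    word-by-word decoder can generate, preserving multi-word entities."""
    return f" {TUPLE_SEP} ".join(
        f" {FIELD_SEP} ".join((e1, e2, rel)) for e1, e2, rel in tuples
    )

def delinearize(text):
    """Recover the tuples, including ones that share an entity."""
    out = []
    for chunk in text.split(TUPLE_SEP):
        parts = [p.strip() for p in chunk.split(FIELD_SEP)]
        if len(parts) == 3:
            out.append(tuple(parts))
    return out

# "New York" appears in two tuples: a shared (overlapping) entity.
tuples = [("New York", "United States", "located_in"),
          ("New York", "New York Times", "headquarters_of")]
encoded = linearize(tuples)
```

The round trip `delinearize(linearize(tuples))` recovers both tuples, which is the property the decoder's target representation needs.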


Author(s):  
Xinyi Xu ◽  
Huanhuan Cao ◽  
Yanhua Yang ◽  
Erkun Yang ◽  
Cheng Deng

In this work, we tackle the zero-shot metric learning problem and propose a novel method, abbreviated ZSML, whose purpose is to learn a distance metric that measures the similarity of unseen categories (even unseen datasets). ZSML achieves strong transferability by capturing multiple nonlinear yet continuous relations among data. It is motivated by two facts: (1) relations can be described from various perspectives; and (2) traditional binary supervision is insufficient to represent continuous visual similarity. Specifically, we first reformulate a collection of specifically shaped convolutional kernels to combine data pairs and generate multiple relation vectors. Furthermore, we design a new cross-update regression loss to discover continuous similarity. Extensive experiments, including intra-dataset and inter-dataset transfer on four benchmark datasets, demonstrate that ZSML achieves state-of-the-art performance.
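The idea of describing one data pair from several perspectives can be sketched by combining two feature vectors under a few fixed rules, each yielding one "relation vector." These hand-written rules are only stand-ins for the paper's learned convolutional kernels; every name below is hypothetical.

```python
def relation_vectors(x, y):
    """Combine a data pair (x, y) into multiple relation vectors,
    each capturing a different view of how the pair relates."""
    diff = [a - b for a, b in zip(x, y)]        # signed-difference view
    prod = [a * b for a, b in zip(x, y)]        # multiplicative-interaction view
    mean = [(a + b) / 2 for a, b in zip(x, y)]  # averaged view
    return [diff, prod, mean]

x, y = [1.0, 2.0], [3.0, 0.5]
views = relation_vectors(x, y)  # three relation vectors for one pair
```

A downstream regressor scoring continuous similarity would consume all three views rather than a single binary same/different label, which is the gap the abstract's second motivating fact points at.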

