recognizing textual entailment
Recently Published Documents


TOTAL DOCUMENTS: 89 (FIVE YEARS: 3)
H-INDEX: 13 (FIVE YEARS: 0)

2021 ◽ Vol 189 ◽ pp. 148-155
Author(s): Rani Aulia Hidayat, Isnaini Nurul Khasanah, Wava Carissa Putri, Rahmad Mahendra

Author(s): Tuhin Chakrabarty, Debanjan Ghosh, Adam Poliak, Smaranda Muresan

Author(s): Rohini Basak, Sudip Kumar Naskar, Alexander Gelbukh

Given two textual fragments, called a text and a hypothesis, respectively, recognizing textual entailment (RTE) is the task of automatically deciding whether the meaning of the second fragment (hypothesis) logically follows from the meaning of the first fragment (text). The chapter presents a method for RTE based on lexical similarity, dependency relations, and semantic similarity. In this method, called LSS-RTE, each of the two fragments is converted to a dependency graph, and the two resulting graph structures are compared using dependency triple matching rules, which were compiled after a thorough and detailed analysis of various RTE development datasets. Experimental results show 60.5%, 64.4%, 62.8%, and 61.5% accuracy on the well-known RTE1, RTE2, RTE3, and RTE4 datasets, respectively, for the two-way classification task, and 54.3% accuracy for the three-way classification task on the RTE4 dataset.
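The core idea of comparing the two dependency graphs via triple matching can be sketched as follows. This is a hypothetical simplification for illustration only: the triples, the two matching rules, and the decision threshold are assumptions, not the rules compiled by the chapter's authors.

```python
# Hedged sketch of dependency-triple matching in the spirit of LSS-RTE.
# The matching rules and threshold below are illustrative assumptions.

def triple_overlap(text_triples, hyp_triples):
    """Fraction of hypothesis triples matched by some text triple."""
    def matches(t, h):
        # Rule 1 (assumed): exact match of (head, relation, dependent)
        if t == h:
            return True
        # Rule 2 (assumed): relaxed match -- same head and dependent,
        # regardless of the dependency relation label
        return t[0] == h[0] and t[2] == h[2]

    matched = sum(1 for h in hyp_triples
                  if any(matches(t, h) for t in text_triples))
    return matched / len(hyp_triples) if hyp_triples else 0.0

# (head, dependency-relation, dependent) triples for a toy pair
text = {("bought", "nsubj", "John"), ("bought", "obj", "car"),
        ("car", "amod", "red")}
hyp  = {("bought", "nsubj", "John"), ("bought", "obj", "car")}

score = triple_overlap(text, hyp)
entails = score >= 0.8  # illustrative decision threshold
print(score, entails)   # -> 1.0 True
```

In practice the triples would come from a dependency parser rather than being written by hand, and the matching rules would also consult lexical and semantic similarity between the words in each triple.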


2019 ◽ Vol 7 ◽ pp. 677-694
Author(s): Ellie Pavlick, Tom Kwiatkowski

We analyze human disagreements about the validity of natural language inferences. We show that, very often, disagreements are not dismissible as annotation “noise”, but rather persist as we collect more ratings and as we vary the amount of context provided to raters. We further show that the type of uncertainty captured by current state-of-the-art models for natural language inference is not reflective of the type of uncertainty present in human disagreements. We discuss the implications of our results for the recognizing textual entailment (RTE)/natural language inference (NLI) task. We argue for a refined evaluation objective that requires models to explicitly capture the full distribution of plausible human judgments.
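An evaluation objective of the kind argued for above — scoring a model against the full distribution of human judgments rather than a single gold label — can be sketched as below. The choice of KL divergence as the metric and the toy distributions are assumptions for illustration, not the paper's proposal.

```python
import math

def kl_divergence(p, q, eps=1e-9):
    """KL(p || q) between two discrete distributions over NLI labels.
    Zero when the model matches the human distribution exactly;
    large when the model is confidently wrong about the spread."""
    return sum(pi * math.log((pi + eps) / (qi + eps))
               for pi, qi in zip(p, q))

# Label order: (entailment, neutral, contradiction)
human = [0.55, 0.35, 0.10]   # empirical distribution over many annotators
model = [0.95, 0.04, 0.01]   # an overconfident softmax output

print(kl_divergence(human, model))
```

Under a single-gold-label accuracy metric the model above would be rewarded for predicting "entailment"; a distribution-matching metric instead penalizes it for collapsing a genuinely split human judgment into near-certainty.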


Author(s): Masashi Yoshikawa, Koji Mineshima, Hiroshi Noji, Daisuke Bekki

In logic-based approaches to reasoning tasks such as Recognizing Textual Entailment (RTE), it is important for a system to have a large amount of knowledge data. However, there is a tradeoff between adding more knowledge data for improved RTE performance and maintaining an efficient RTE system, as such a large database is problematic in terms of memory usage and computational complexity. In this work, we show that the processing time of a state-of-the-art logic-based RTE system can be significantly reduced by replacing its search-based axiom injection (abduction) mechanism with one based on Knowledge Base Completion (KBC). We integrate this mechanism into a Coq plugin that provides a proof automation tactic for natural language inference. Additionally, we show empirically that adding new knowledge data contributes to better RTE performance while not harming the processing speed in this framework.
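The idea of KBC-based axiom injection — scoring candidate axioms with a learned knowledge-base embedding instead of searching a large database — can be sketched as follows. This is a toy stand-in: a DistMult-style scoring function with random (untrained) embeddings, not the paper's actual model or Coq integration.

```python
import numpy as np

# Hedged sketch of KBC-style axiom scoring (DistMult-like trilinear score).
# Embeddings here are random placeholders; a real system would train them
# on a knowledge base so that plausible triples score highly.

rng = np.random.default_rng(0)
dim = 8
entity_emb = {w: rng.normal(size=dim) for w in ("dog", "animal", "car")}
rel_emb = {"is-a": rng.normal(size=dim)}

def score(head, rel, tail):
    """DistMult score <e_h, w_r, e_t>: higher means more plausible."""
    return float(np.sum(entity_emb[head] * rel_emb[rel] * entity_emb[tail]))

# Rank candidate axioms that could close a gap in the current proof,
# then inject the highest-scoring one instead of searching a database.
candidates = [("dog", "is-a", "animal"), ("dog", "is-a", "car")]
best = max(candidates, key=lambda c: score(*c))
print(best)
```

The efficiency gain comes from the fact that scoring a candidate triple is a fixed-cost embedding lookup and dot product, independent of the size of the original knowledge database.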


Author(s): Vivian S. Silva, André Freitas, Siegfried Handschuh

Recognizing textual entailment is a key task for many semantic applications, such as Question Answering, Text Summarization, and Information Extraction, among others. Entailment scenarios can range from a simple syntactic variation to more complex semantic relationships between pieces of text, but most approaches try a one-size-fits-all solution that usually favors some scenario to the detriment of another. We propose a composite approach for recognizing textual entailment which analyzes the entailment pair to decide whether it must be resolved syntactically or semantically. We also make the answer interpretable: whenever an entailment is solved semantically, we explore a knowledge base composed of structured lexical definitions to generate natural language human-like justifications, explaining the semantic relationship holding between the pieces of text. Besides outperforming well-established entailment algorithms, our composite approach takes an important step towards Explainable AI, using world knowledge to make the semantic reasoning process explicit and understandable.
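The routing decision at the heart of the composite approach — sending simple syntactic variations to a cheap resolver and everything else to semantic reasoning — can be sketched as below. The word-overlap heuristic and its threshold are illustrative assumptions, not the authors' actual pair-analysis criteria.

```python
# Hedged sketch of composite routing: decide whether an entailment pair
# looks like a simple syntactic variation (high lexical overlap) or needs
# semantic resolution over a knowledge base. Heuristic and threshold are
# assumptions for illustration.

def route(text, hypothesis, overlap_threshold=0.8):
    t_words = set(text.lower().split())
    h_words = set(hypothesis.lower().split())
    overlap = len(t_words & h_words) / len(h_words)
    return "syntactic" if overlap >= overlap_threshold else "semantic"

print(route("A man is playing a guitar",
            "A man is playing a guitar outdoors"))  # -> syntactic
print(route("A man is playing a guitar",
            "A person performs music"))             # -> semantic
```

In the second pair almost no words are shared even though the meanings are related, so the router correctly defers to the semantic component, which is also where the human-like justifications from lexical definitions would be generated.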

