cluster ranking
Recently Published Documents

Total documents: 15 (last five years: 2) ◽  H-index: 6 (last five years: 0)

2021 ◽  Vol 19 ◽  pp. 2269-2278 ◽  Author(s): Shahabeddin Sotudian, Israel T. Desta, Nasser Hashemi, Shahrooz Zarbafian, Dima Kozakov, ...

2019 ◽  Vol 21 (51) ◽  pp. 313 ◽  Author(s): Anca Băndoi, Claudiu Bocean, Aurelia Florea, Dalia Simion, ...

2011 ◽  Vol 41 ◽  pp. 367-395 ◽  Author(s): O. Kurland, E. Krikon

Exploiting information induced from (query-specific) clustering of top-retrieved documents has long been proposed as a means of improving precision at the very top ranks of the returned results. We present a novel language-model approach to ranking query-specific clusters by the presumed percentage of relevant documents they contain. While most previous cluster-ranking approaches focus on the cluster as a whole, our model also utilizes information induced from the documents associated with the cluster. Our model substantially outperforms previous approaches at identifying clusters that contain a high percentage of relevant documents. Furthermore, using the model to produce a document ranking yields precision-at-top-ranks performance that is consistently better than that of the initial ranking upon which clustering is performed. The performance also compares favorably with that of a state-of-the-art pseudo-feedback-based retrieval method.
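As an illustrative sketch only (not the authors' actual estimator), ranking query-specific clusters by query likelihood can be outlined in Python: each cluster is scored by a mixture of the language model of its concatenated text and the mean score of its member documents, echoing the abstract's point that document-level information supplements the cluster-level model. The function names, the Dirichlet prior `mu`, and the mixture weight `lam` are all assumptions made for this sketch.

```python
from collections import Counter
from math import log

def unigram_lm(text, collection, mu=10.0):
    """Dirichlet-smoothed unigram language model over whitespace tokens.
    (mu is an assumed smoothing parameter, not a value from the paper.)"""
    tf = Counter(text.split())
    total = sum(tf.values())
    coll_tf = Counter(collection.split())
    coll_total = sum(coll_tf.values())
    def prob(term):
        p_coll = coll_tf[term] / coll_total if coll_total else 0.0
        return (tf[term] + mu * p_coll) / (total + mu)
    return prob

def query_log_likelihood(query, lm):
    # Sum of log-probabilities of the query terms under the model.
    return sum(log(max(lm(t), 1e-12)) for t in query.split())

def rank_clusters(query, clusters, collection, lam=0.5):
    """Score each cluster by mixing (a) the LM of the cluster's concatenated
    text with (b) the mean LM score of its member documents, then sort
    clusters by descending score."""
    scored = []
    for cid, docs in clusters.items():
        cluster_lm = unigram_lm(" ".join(docs), collection)
        cluster_score = query_log_likelihood(query, cluster_lm)
        doc_scores = [query_log_likelihood(query, unigram_lm(d, collection))
                      for d in docs]
        doc_score = sum(doc_scores) / len(doc_scores)
        scored.append((cid, lam * cluster_score + (1 - lam) * doc_score))
    return sorted(scored, key=lambda x: x[1], reverse=True)

# Toy usage: the cluster whose documents match the query should rank first.
clusters = {
    "c1": ["cluster ranking improves retrieval", "language model ranking"],
    "c2": ["weather report sunny", "sports results today"],
}
collection = " ".join(d for docs in clusters.values() for d in docs)
ranking = rank_clusters("cluster ranking", clusters, collection)
# "c1" ranks first for this query
```

The mixture weight `lam` controls how much the whole-cluster model versus the per-document models contribute; setting it to 1.0 recovers a purely cluster-level ranker.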


2011 ◽  
Vol 40 ◽  
pp. 469-521 ◽  
Author(s):  
A. Rahman ◽  
V. Ng

Traditional learning-based coreference resolvers operate by training a mention-pair model to determine whether two mentions are coreferent. Though conceptually simple and easy to understand, the mention-pair model is linguistically rather unappealing and, in terms of sophistication, lags far behind the heuristic-based coreference models proposed in the pre-statistical NLP era. Two independent lines of recent research have attempted to improve the mention-pair model: one by acquiring a mention-ranking model to rank the preceding mentions for a given anaphor, and the other by training an entity-mention model to determine whether a preceding cluster is coreferent with a given mention. We propose a cluster-ranking approach to coreference resolution that combines the strengths of the mention-ranking model and the entity-mention model, and is therefore theoretically more appealing than either. In addition, we seek to improve cluster rankers via two extensions: (1) lexicalization and (2) incorporating knowledge of anaphoricity by jointly modeling anaphoricity determination and coreference resolution. Experimental results on the ACE data sets demonstrate that cluster rankers outperform competing approaches and that both of our extensions are effective.
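To make the cluster-ranking idea concrete, here is a minimal, hypothetical sketch (not the authors' system): each mention in document order is scored against every preceding cluster, and a threshold stands in for the joint anaphoricity decision, i.e. whether the mention starts a new entity. The head-match scorer, the threshold value, and all names are assumptions for this toy example.

```python
def head_match_score(mention, cluster):
    """Toy cluster-level score: fraction of the cluster's mentions sharing
    the candidate mention's head word (approximated as its last token)."""
    head = mention.split()[-1].lower()
    matches = sum(1 for m in cluster if m.split()[-1].lower() == head)
    return matches / len(cluster)

def resolve(mentions, new_threshold=0.5):
    """Greedy cluster-ranking resolution: attach each mention to the
    highest-scoring preceding cluster, or start a new cluster when no
    cluster beats `new_threshold` (a stand-in for jointly modeled
    anaphoricity determination)."""
    clusters = []
    for mention in mentions:
        if clusters:
            best = max(range(len(clusters)),
                       key=lambda i: head_match_score(mention, clusters[i]))
            if head_match_score(mention, clusters[best]) >= new_threshold:
                clusters[best].append(mention)
                continue
        clusters.append([mention])  # non-anaphoric: open a new entity
    return clusters

# Toy usage: repeated heads cluster together, distinct heads stay apart.
print(resolve(["the cat", "a dog", "the cat"]))
# [['the cat', 'the cat'], ['a dog']]
```

Scoring whole clusters rather than individual mention pairs is what lets such a ranker use entity-level evidence (here, agreement across all of a cluster's mentions) when deciding where a mention belongs.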

