To Phrase or Not to Phrase – Impact of User versus System Term Dependence upon Retrieval

2018 ◽  
Vol 2 (1) ◽  
pp. 1-14 ◽  
Author(s):  
Christina Lioma ◽  
Birger Larsen ◽  
Peter Ingwersen

Abstract When submitting queries to information retrieval (IR) systems, users often have the option of specifying which, if any, of the query terms are heavily dependent on each other and should be treated as a fixed phrase, for instance by placing them between quotes. In addition to such cases where users specify term dependence, automatic ways also exist for IR systems to detect dependent terms in queries. Most IR systems use both user and algorithmic approaches. It is not, however, clear whether and to what extent user-defined term dependence agrees with algorithmic estimates of term dependence, nor which of the two may fetch higher performance gains. Simply put, is it better to trust users or the system to detect term dependence in queries? To answer this question, we experiment with 101 crowdsourced search engine users and 334 queries (52 train and 282 test TREC queries) and we record 10 assessments per query. We find that (i) user assessments of term dependence differ significantly from algorithmic assessments of term dependence (their overlap is approximately 30%); (ii) there is little agreement among users about term dependence in queries, and this disagreement increases as queries become longer; (iii) the potential retrieval gain that can be fetched by treating term dependence (both user- and system-defined) over a bag-of-words baseline is restricted to a small subset (approximately 8%) of the queries, and is much higher for low-depth than deep precision measures. Points (ii) and (iii) constitute novel insights into term dependence.
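The two assessment sources contrasted in the abstract can be sketched in a few lines: user-defined dependence read from quoted spans, and a toy algorithmic detector based on corpus bigram counts (the `bigram_counts` table and the threshold are illustrative assumptions, not the paper's method):

```python
import re

def user_phrases(query):
    """Phrases the user marked as dependent by quoting them."""
    return [p.lower() for p in re.findall(r'"([^"]+)"', query)]

def system_phrases(query, bigram_counts, threshold=1000):
    """Toy algorithmic detector: adjacent terms count as a phrase when
    their corpus bigram frequency exceeds a threshold."""
    terms = query.replace('"', "").lower().split()
    return [f"{a} {b}" for a, b in zip(terms, terms[1:])
            if bigram_counts.get((a, b), 0) >= threshold]

def overlap(user, system):
    """Jaccard overlap between the two sets of detected phrases."""
    u, s = set(user), set(system)
    return len(u & s) / len(u | s) if u | s else 1.0

counts = {("new", "york"): 5000, ("york", "marathon"): 800}
q = '"new york" marathon results'
print(overlap(user_phrases(q), system_phrases(q, counts)))  # 1.0
```

Running such a comparison over many queries and assessors is, in essence, how an overlap figure like the reported ~30% can be measured.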

Author(s):  
Cecil Eng Huang Chua ◽  
Roger H. Chiang ◽  
Veda C. Storey

Search engines are ubiquitous tools for seeking information from the Internet and, as such, have become an integral part of our information society. New search engines that combine ideas from separate search engines generally outperform the search engines from which they took ideas. Designers, however, may not be aware of the work of other search engine developers or such work may not be available in modules that can be incorporated into another search engine. This research presents an interoperability architecture for building customized search engines. Existing search engines are analyzed and decomposed into self-contained components that are classified into six categories. A prototype, called the Automated Software Development Environment for Information Retrieval, was developed to implement the interoperability architecture, and an assessment of its feasibility was carried out. The prototype resolves conflicts between components of separate search engines and demonstrates how design features across search engines can be integrated.
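The decomposition into self-contained, swappable components can be illustrated with a minimal sketch (the three stages and their interfaces below are simplifications invented for illustration, not the prototype's actual six-category design):

```python
# Each component exposes a small interface, so a stage taken from one
# search engine can replace the equivalent stage in another.
class Tokenizer:
    def run(self, text):
        return text.lower().split()

class Indexer:
    def __init__(self):
        self.postings = {}          # term -> set of doc ids
    def add(self, doc_id, tokens):
        for t in tokens:
            self.postings.setdefault(t, set()).add(doc_id)

class Matcher:
    def run(self, postings, query_tokens):
        """Conjunctive (AND) matching over the posting lists."""
        sets = [postings.get(t, set()) for t in query_tokens]
        return set.intersection(*sets) if sets else set()

tok, idx, match = Tokenizer(), Indexer(), Matcher()
idx.add(1, tok.run("Search engines are ubiquitous tools"))
idx.add(2, tok.run("Component based search engine design"))
hits = match.run(idx.postings, tok.run("search engine"))  # {2}
```

Because `Matcher` depends only on the posting-list structure, a ranking component from a different engine could be dropped in without touching `Tokenizer` or `Indexer`, which is the interoperability idea in miniature.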


2010 ◽  
pp. 652-668
Author(s):  
Charles Delalonde ◽  
Eddie Soulier

This research leverages information retrieval activity in order to build a network of organizational expertise in a distributed R&D laboratory. The authors describe traditional knowledge management practices and review post-cognitivist theories in order to define social creation in collaborative information retrieval activity. Actor-Network Theory accurately describes association processes and includes both human and non-human entities. The chapter relates this theory to the emergence of online social search services and expert retrieval systems. The authors then propose a social search engine, named DemonD, that identifies not only documents but, more specifically, users relevant to a query. DemonD relies on transparent profile construction based upon user activity, community participation, and shared documents. Individuals are invited to participate in a dedicated newsgroup, and the information exchanged is capitalized upon. The evaluation of the service, both ergonomic and through simulation, provides encouraging data.


2018 ◽  
Vol 7 (3.3) ◽  
pp. 119
Author(s):  
B Lokesh ◽  
Ravoori Charishma ◽  
Natuva Hiranmai

Farmers face a multitude of problems nowadays, such as lower crop production, tumultuous weather patterns, and crop infections. All of these issues can be solved if they have access to the right information. The current methods of information retrieval, such as search engine lookup and talking to an Agriculture Officer, have multiple defects. A more suitable solution, which we propose, is an Android application, available at all times, that can give succinct answers to any question a farmer may pose. The application will include an image recognition component that will be able to recognize a variety of crop diseases in case the farmer does not know what he is dealing with and is unable to describe it. Image recognition is the ability of a computer to recognize and distinguish between different objects, and is actually a much harder problem to solve than it seems. We are using TensorFlow, a tool that uses convolutional neural networks, to implement it.
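The TensorFlow pipeline itself is not reproduced here, but the core operation a convolutional layer performs can be shown in plain Python (the tiny image and the edge-detecting kernel below are illustrative assumptions):

```python
def conv2d_valid(image, kernel):
    """2D cross-correlation ('valid' mode) over a grayscale image: the
    building block of the convolutional layers mentioned above."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(image[i + di][j + dj] * kernel[di][dj]
                            for di in range(kh) for dj in range(kw))
    return out

# A vertical-edge kernel applied to a tiny image whose right half is bright:
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]
edge = [[-1, 1],
        [-1, 1]]
print(conv2d_valid(img, edge))  # [[0, 2, 0], [0, 2, 0]]
```

A trained CNN stacks many such filters, learning kernels that respond to lesion textures and leaf discolorations rather than hand-picked edges.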


2011 ◽  
pp. 74-100
Author(s):  
Eliana Campi ◽  
Gianluca Lorenzo

This chapter presents technologies and approaches for information retrieval in a knowledge base. We intend to show that the use of an ontology for domain representation and knowledge search offers a more efficient approach to knowledge management. This approach focuses on the meaning of words, thus becoming an important element in the building of the Semantic Web. Search based on both keywords and an ontology allows more effective information retrieval by exploiting the semantics of the information in a variety of data. We present a method for taxonomy building and for annotating and searching documents with taxonomy concepts. We also describe our experience in the creation of an informal taxonomy, automatic classification, and the validation of search results with traditional measures, such as precision, recall, and F-measure.


Author(s):  
Ji-Rong Wen

A Web query log is a file that keeps track of the activities of the users of a search engine. Compared to the traditional information retrieval setting, in which documents are the only information source available, query logs are an additional information source in the Web search setting. Based on query logs, a set of Web mining techniques, such as log-based query clustering, log-based query expansion, collaborative filtering, and personalized search, can be employed to improve the performance of Web search.
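Log-based query clustering, one of the techniques listed above, can be sketched minimally: queries whose clicks land on the same documents are grouped together (the greedy merge and the sample log below are illustrative assumptions, not a specific published algorithm):

```python
from collections import defaultdict

def cluster_by_clicks(log):
    """log: iterable of (query, clicked_doc) pairs from a query log.
    Greedily groups queries that share at least one clicked document."""
    clicks = defaultdict(set)
    for query, doc in log:
        clicks[query].add(doc)
    clusters = []
    for query, docs in clicks.items():
        for cluster in clusters:
            if any(docs & clicks[q] for q in cluster):
                cluster.add(query)
                break
        else:
            clusters.append({query})
    return clusters

log = [("cheap flights", "d1"), ("budget airfare", "d1"),
       ("python tutorial", "d2"), ("learn python", "d2")]
print(cluster_by_clicks(log))
```

The resulting clusters pair "cheap flights" with "budget airfare" despite the queries sharing no words, which is exactly the signal that log-based query expansion exploits.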


2014 ◽  
Vol 926-930 ◽  
pp. 2263-2266
Author(s):  
Li Juan Diao ◽  
Jun Zhong Gu ◽  
Liang Chun

The ontology definition metamodel has been widely adopted for building ontologies. However, existing ontology metamodels are only suitable for building an ontology in a single domain. With collaboration and sharing among multiple domains, we face the serious problem of how to achieve semantic interoperability. For this problem, we need to combine general ontologies with domain ontologies and merge all existing ontologies through an ontology metamodel. In this paper, we define the main components of the ontology metamodel and present the conditional context and the contextual concept unit. In addition, we introduce a method of mapping between the conditional context and the contextual concept unit. Finally, we use an example about information retrieval to illustrate its function and analyze its feasibility.


2014 ◽  
Vol 23 (04) ◽  
pp. 1460014
Author(s):  
Georgios Stratogiannis ◽  
Georgios Siolas ◽  
Andreas Stafylopatis

We describe a system that performs semantic Question Answering based on the combination of classic Information Retrieval methods with semantic ones. First, we use a search engine to gather web pages and then apply a noun phrase extractor to extract all the candidate answer entities from them. Candidate entities are ranked using a linear combination of two IR measures to pick the most relevant ones. For each of the top-ranked candidate entities, we find the corresponding Wikipedia page. We then propose a novel way to exploit semantic information contained in the structure of Wikipedia. A vector is built for every entity from Wikipedia category names by splitting and lemmatizing the words that form them. These vectors maintain semantic information in the sense that we are given the ability to measure semantic closeness between entities. Based on this, we apply an intelligent clustering method to the candidate entities and show that the candidate entities in the biggest cluster are the most semantically related to the ideal answers to the query. Results on the topics of the TREC 2009 Related Entity Finding task dataset show promising performance.
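The category-name vectors and the closeness measure they enable can be sketched as follows (lemmatization is approximated by lowercasing here, and the entities and category names are invented for illustration, not taken from the paper's data):

```python
import math
from collections import Counter

def category_vector(categories):
    """Bag-of-words vector over the words of an entity's category names."""
    return Counter(w.lower() for name in categories for w in name.split())

def cosine(u, v):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(u[w] * v[w] for w in u)
    norm = math.sqrt(sum(c * c for c in u.values())) * \
           math.sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

paris = category_vector(["Capitals in Europe", "Cities in France"])
lyon = category_vector(["Cities in France", "Populated places"])
print(round(cosine(paris, lyon), 3))
```

Pairwise similarities like this one feed the clustering step: entities whose category vocabularies overlap end up in the same cluster, and the largest cluster is taken as the semantically coherent answer set.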

