Comparative Evaluation of Link-Based Approaches for Candidate Ranking in Link-to-Wikipedia Systems

2014 ◽  
Vol 49 ◽  
pp. 733-773 ◽  
Author(s):  
N. Fernandez Garcia ◽  
J. Arias Fisteus ◽  
L. Sanchez Fernandez

In recent years, the task of automatically linking pieces of text (anchors) mentioned in a document to the Wikipedia articles that represent the meaning of these anchors has received extensive research attention. Typically, link-to-Wikipedia systems try to find a set of Wikipedia articles that are candidates to represent the meaning of the anchor and then rank these candidates to select the most appropriate one. In this ranking process the systems rely on context information obtained from the document where the anchor is mentioned and/or from Wikipedia. In this paper we center our attention on the use of Wikipedia links as context information. In particular, we offer a review of several state-of-the-art candidate ranking approaches that rely on Wikipedia link information. In addition, we provide a comparative empirical evaluation of the different approaches on five corpora: the TAC 2010 corpus and four corpora built from actual Wikipedia articles and news items.
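One widely used link-based ranking signal of the kind this survey covers is "commonness": the prior probability that an anchor text links to a given article, estimated from Wikipedia's own link counts. A minimal sketch, assuming hypothetical link counts for the ambiguous anchor "jaguar" (the counts and article titles here are illustrative, not taken from the paper):

```python
from collections import Counter

def commonness(anchor_counts: Counter) -> dict:
    """Turn raw anchor->article link counts into prior probabilities."""
    total = sum(anchor_counts.values())
    return {article: n / total for article, n in anchor_counts.items()}

def rank_candidates(anchor_counts: Counter) -> list:
    """Rank candidate articles for one anchor by descending commonness."""
    priors = commonness(anchor_counts)
    return sorted(priors, key=priors.get, reverse=True)

# Hypothetical counts of how often the anchor "jaguar" links to each article:
counts = Counter({"Jaguar_(animal)": 700, "Jaguar_Cars": 250, "Jacksonville_Jaguars": 50})
ranking = rank_candidates(counts)  # most common sense first
```

In practice, commonness is usually combined with a relatedness score computed from the link structure of the surrounding context, which is where the approaches compared in the paper differ.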

2021 ◽  
Author(s):  
Muhammad Shahroz Nadeem ◽  
Sibt Hussain ◽  
Fatih Kurugollu

This paper addresses the textual deblurring problem. We propose a new loss function and provide an empirical evaluation of the design choices, on the basis of which a memory-friendly CNN model is proposed that performs better than the state-of-the-art CNN method.


Author(s):  
Yanchen Deng ◽  
Ziyu Chen ◽  
Dingding Chen ◽  
Wenxin Zhang ◽  
Xingqiong Jiang

Asymmetric distributed constraint optimization problems (ADCOPs) are an emerging model for coordinating agents with personal preferences. However, the existing inference-based complete algorithms which use local eliminations cannot be applied to ADCOPs, as the parent agents are required to transfer their private functions to their children. Rather than disclosing private functions explicitly to facilitate local eliminations, we solve the problem by enforcing delayed eliminations and propose AsymDPOP, the first inference-based complete algorithm for ADCOPs. To solve the severe scalability problems incurred by delayed eliminations, we propose to reduce the memory consumption by propagating a set of smaller utility tables instead of a joint utility table, and to reduce the computation efforts by sequential optimizations instead of joint optimizations. The empirical evaluation indicates that AsymDPOP significantly outperforms the state-of-the-art, as well as the vanilla DPOP with PEAV formulation.


2008 ◽  
Vol 07 (04) ◽  
pp. C02
Author(s):  
Lynn Uyen Tran

Explainers have a longstanding presence in science museums and centres, and play a significant role in the institutions’ educational agenda. They interact with the public, and help make visitors’ experiences meaningful and memorable. Despite their valuable contributions, little research attention has been paid to the role and practice of these individuals. From the limited research literature that does exist, we know that museum educators employ a complex mix of skills and knowledge. We also know such educators have a variety of experiences and qualifications, which creates a rich diversity within the field. Finally, we know that the content and quality of programmes designed to educate novice explainers vary across institutions. Should we work toward a shared identity across institutions? Or even a “professionalization”? The paper explores the state of the art of the discussion around these questions.



2006 ◽  
Vol 84 (2) ◽  
pp. 175-194 ◽  
Author(s):  
Manuel Maldonado

The present work summarizes the progress attained in the study of sponge larval ecology since the state-of-the-art reviews performed in the 1970s and stresses the major weaknesses in our current understanding. Most available information on this subject comes from laboratory studies, with just occasional field observations or experiments. The data are also strongly biased because they are mostly derived from just one larval type out of the eight types known in the phylum Porifera. Descriptive studies on larval histology are relatively abundant, but investigations directed at unravelling the cytological basis of the main larval behaviors are scarce. Most aspects of basic larval metabolism and sensing processes remain largely uninvestigated. Modelling of larval ecology is virtually lacking, with no serious attempt to investigate how the major features of larval ecology affect the structure and dynamics of sponge populations. In summary, the ecology of the sponge larva needs further research attention if we are to achieve a global understanding of the biology of the phylum Porifera.


Author(s):  
Steven Y. Liang ◽  
Rogelio L. Hecker ◽  
Robert G. Landers

Automation at the process level for machining operations and machine tools has been a focus of research attention in both academia and industry alike for several decades. Research in this area has carried strong expectations in the context of increased productivity, improved part quality, reduced costs, and relaxed part design constraints. The basis for these expectations is two-fold. First, machining process automation, if exercised strategically and advantageously, can perform consistently for large batch production or flexibly for small batch jobs. Secondly, process automation can be set up to autonomously tune the machine parameters (feed, speed, depth of cut, etc.) in pursuit of desirable performance (tolerance, finish, cycle time, etc.), thereby bridging the gap between product design and process planning while reaching beyond the human operators’ capability. The success of manufacturing process automation hinges primarily on the effectiveness of process monitoring and control systems. This paper reviews the evolution and the state of the art of machining process monitoring and control technologies. Key issues to be presented include sensor techniques, control techniques, hardware availability, and implementation examples. Also to be reviewed are the benefits of the systems and the reasons for their delayed realization in many of today’s industrial application domains.


2019 ◽  
Author(s):  
Gabriel O. Ramos ◽  
Ana L. C. Bazzan ◽  
Bruno C. Da Silva

Traffic congestion presents a major challenge in large cities. Considering the distributed, self-interested nature of traffic, we tackle congestion using multiagent reinforcement learning (MARL). In this thesis, we advance the state of the art by delivering the first MARL convergence guarantees in congestion-like problems. We introduce an algorithm through which drivers can learn optimal routes by locally estimating the regret associated with their decisions, which we prove to converge to an equilibrium. In order to mitigate the effects of selfishness, we also devise a decentralised tolling scheme, which we prove to minimise traffic congestion levels. Our theoretical results are supported by an extensive empirical evaluation on realistic traffic networks.
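The regret-based route choice described above can be sketched with a generic regret-matching scheme: each driver accumulates, per route, how much cheaper that route would have been than the one actually taken, and then chooses routes with probability proportional to positive regret. This is a minimal sketch of that general family of updates, not the thesis's exact algorithm; the cost values are illustrative:

```python
def update_regrets(regrets, costs, chosen):
    """Local regret estimate: for each route, accumulate how much cheaper
    it would have been than the route actually chosen on this trip."""
    chosen_cost = costs[chosen]
    return [r + (chosen_cost - c) for r, c in zip(regrets, costs)]

def regret_matching_probs(regrets):
    """Map cumulative regrets to route-choice probabilities: routes with
    positive regret get probability proportional to that regret; if no
    route has positive regret, choose uniformly."""
    pos = [max(r, 0.0) for r in regrets]
    s = sum(pos)
    n = len(regrets)
    if s == 0:
        return [1.0 / n] * n
    return [p / s for p in pos]

# Two routes; the driver took route 1 (cost 3.0) while route 0 cost 1.0:
regrets = update_regrets([0.0, 0.0], costs=[1.0, 3.0], chosen=1)
probs = regret_matching_probs(regrets)  # all mass shifts to route 0
```

Repeating this update over many trips is what drives the learning dynamics toward an equilibrium in congestion-like settings.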


2010 ◽  
Vol 39 ◽  
pp. 51-126 ◽  
Author(s):  
M. Katz ◽  
C. Domshlak

State-space search with explicit abstraction heuristics is at the state of the art of cost-optimal planning. These heuristics are inherently limited, nonetheless, because the size of the abstract space must be bounded by some, even if a very large, constant. Targeting this shortcoming, we introduce the notion of (additive) implicit abstractions, in which the planning task is abstracted by instances of tractable fragments of optimal planning. We then introduce a concrete setting of this framework, called fork-decomposition, that is based on two novel fragments of tractable cost-optimal planning. The induced admissible heuristics are then studied formally and empirically. This study testifies to the accuracy of the fork-decomposition heuristics, yet our empirical evaluation also stresses the tradeoff between their accuracy and the runtime complexity of computing them. Indeed, some of the power of the explicit abstraction heuristics comes from precomputing the heuristic function offline and then determining h(s) for each evaluated state s by a very fast lookup in a "database." By contrast, while fork-decomposition heuristics can be calculated in polynomial time, computing them is far from being fast. To address this problem, we show that the time-per-node complexity bottleneck of the fork-decomposition heuristics can be successfully overcome. We demonstrate that an equivalent of the explicit abstraction notion of a "database" exists for the fork-decomposition abstractions as well, despite their exponential-size abstract spaces. We then verify empirically that heuristic search with the "databased" fork-decomposition heuristics favorably competes with the state of the art of cost-optimal planning.
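The "database" idea behind explicit abstraction heuristics can be illustrated with a toy sketch: precompute, once and offline, the goal distance of every abstract state by backward search, then evaluate h(s) during search with a constant-time lookup through an abstraction mapping. This is a generic pattern-database-style illustration (the abstract graph, mapping function, and unit costs are assumptions for the example), not the fork-decomposition construction itself:

```python
from collections import deque

def precompute_abstract_costs(abstract_goal, abstract_edges):
    """Backward BFS from the abstract goal over unit-cost edges: returns
    the goal distance of every reachable abstract state, stored once in a
    dict (the 'database')."""
    rev = {}
    for u, v in abstract_edges:
        rev.setdefault(v, []).append(u)
    dist = {abstract_goal: 0}
    queue = deque([abstract_goal])
    while queue:
        v = queue.popleft()
        for u in rev.get(v, []):
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)
    return dist

def h(state, alpha, database):
    """Admissible heuristic: map the concrete state to its abstract state
    via alpha, then look its cost up in the precomputed database."""
    return database.get(alpha(state), 0)

# Toy abstract space 0 -> 1 -> 2 with goal 2; concrete states map via mod 3:
db = precompute_abstract_costs(2, [(0, 1), (1, 2)])
estimate = h(3, lambda s: s % 3, db)  # state 3 abstracts to 0, two steps from goal
```

The paper's contribution, in these terms, is showing that an equivalent precompute-then-lookup scheme exists for fork-decomposition abstractions even though their abstract spaces are exponentially large.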

