Multilingual Knowledge Base Completion by Cross-lingual Semantic Relation Inference

Author(s):  
Nadia Bebeshina-Clairet ◽  
Mathieu Lafourcade


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Shaofei Wang ◽  
Depeng Dang

Purpose
Previous knowledge base question answering (KBQA) models consider only the monolingual scenario and cannot be directly extended to the cross-lingual scenario, in which the language of the questions differs from that of the knowledge base (KB). Although a machine translation (MT) model can bridge the gap by translating questions into the language of the KB, the noise in translated questions can accumulate and sharply impair the final performance. Therefore, the authors propose a method to improve the robustness of KBQA models in the cross-lingual scenario.

Design/methodology/approach
The authors propose a knowledge distillation-based robustness enhancement (KDRE) method. First, a monolingual model (the teacher) is trained on ground truth (GT) data. Then, to imitate practical noise, a noise-generating model is designed to inject two types of noise into questions: general noise and translation-aware noise. Finally, the noisy questions are fed to the student model, which is jointly trained on GT data and on distilled data derived from the teacher when it is fed GT questions.

Findings
The experimental results demonstrate that KDRE improves the performance of models in the cross-lingual scenario. KDRE improves the performance of each module in the KBQA model, and the knowledge distillation (KD) and the noise-generating model complementarily boost model robustness.

Originality/value
The authors are the first to extend KBQA models from the monolingual to the cross-lingual scenario, and the first to apply KD to KBQA to develop robust cross-lingual models.
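The teacher-student objective described above can be sketched as a conventional distillation loss: the student sees noise-injected questions but is pulled toward the teacher's predictions on the clean GT questions. A minimal NumPy sketch, assuming a classification-style output layer; the temperature and loss weighting are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def softmax(z, t=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = z / t
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kdre_loss(teacher_logits, student_logits, labels,
              temperature=2.0, alpha=0.5):
    """Joint loss of the distillation-based robustness scheme:
    teacher_logits come from clean GT questions, student_logits from
    noise-injected questions (hypothetical weighting via alpha)."""
    n = len(labels)
    # Hard loss: cross-entropy of the student against GT labels.
    p_student = softmax(student_logits)
    hard = -np.log(p_student[np.arange(n), labels] + 1e-12).mean()
    # Soft loss: KL divergence to the teacher's softened distribution.
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    soft = (p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12))
            ).sum(axis=-1).mean() * temperature ** 2
    return alpha * hard + (1 - alpha) * soft
```

When the student already matches the teacher, the soft term vanishes and only the cross-entropy against GT labels remains.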


Terminology ◽  
2002 ◽  
Vol 8 (1) ◽  
pp. 91-111 ◽  
Author(s):  
Caroline Barrière

This research examines the complexity inherent in the causal relation and the implications for its representation in a Terminological Knowledge Base (TKB). Supported by a more general study of semantic relation hierarchies, a hierarchical refinement of the causal relation is proposed. The refinement results from a manual corpus search, which shows that it efficiently captures and formalizes variations expressed in text. The feasibility of determining such a categorization during automatic extraction from corpora is also explored. Conceptual graphs are used as a representation formalism, to which certainty information has been added to capture the degree of certainty surrounding the interaction between two terms involved in a causal relation.
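A certainty-annotated causal relation of the kind described could be modeled minimally as follows; the field names and the subtype label are illustrative assumptions, not the article's actual TKB schema or causal hierarchy:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CausalRelation:
    """A causal link between two terms, refined by a subtype from a
    causal-relation hierarchy and annotated with a certainty degree."""
    cause: str
    effect: str
    subtype: str       # hypothetical subtype label from a refined hierarchy
    certainty: float   # degree of certainty in [0, 1]

def confident(relations, threshold=0.8):
    """Keep only relations whose certainty meets the threshold."""
    return [r for r in relations if r.certainty >= threshold]

rel = CausalRelation("smoking", "lung_cancer", "favour", 0.9)
```

The `certainty` field plays the role of the certainty information attached to the conceptual-graph representation.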


Author(s):  
Muhao Chen ◽  
Yingtao Tian ◽  
Mohan Yang ◽  
Carlo Zaniolo

Many recent works have demonstrated the benefits of knowledge graph embeddings in completing monolingual knowledge graphs. Inasmuch as related knowledge bases are built in several different languages, achieving cross-lingual knowledge alignment will help people construct a coherent knowledge base, and assist machines in dealing with different expressions of entity relationships across diverse human languages. Unfortunately, achieving this highly desirable cross-lingual alignment by human labor is very costly and error-prone. Thus, we propose MTransE, a translation-based model for multilingual knowledge graph embeddings, to provide a simple and automated solution. By encoding entities and relations of each language in a separate embedding space, MTransE provides transitions for each embedding vector to its cross-lingual counterparts in other spaces, while preserving the functionalities of monolingual embeddings. We deploy three different techniques to represent cross-lingual transitions, namely axis calibration, translation vectors, and linear transformations, and derive five variants of MTransE using different loss functions. Our models can be trained on partially aligned graphs, where just a small portion of triples are aligned with their cross-lingual counterparts. The experiments on cross-lingual entity matching and triple-wise alignment verification show promising results, with some variants consistently outperforming others on different tasks. We also explore how MTransE preserves the key properties of its monolingual counterpart.
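The linear-transformation variant of the cross-lingual transition can be sketched as fitting a matrix M that carries source-space entity vectors onto their aligned target-space counterparts, after which entity matching is a nearest-neighbour search in the target space. The toy entities and random vectors below are placeholders, not trained embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4

# Toy vectors standing in for two pre-trained monolingual embedding
# spaces (names and values are illustrative only).
emb_en = {"Paris": rng.normal(size=dim), "France": rng.normal(size=dim)}
emb_fr = {"Paris@fr": rng.normal(size=dim), "France@fr": rng.normal(size=dim)}
aligned = [("Paris", "Paris@fr"), ("France", "France@fr")]

# Linear-transformation transition: fit M so that M @ e_src ~ e_tgt on
# the (partially) aligned pairs, via the least-squares pseudo-inverse.
X = np.stack([emb_en[s] for s, _ in aligned], axis=1)   # dim x n_pairs
Y = np.stack([emb_fr[t] for _, t in aligned], axis=1)
M = Y @ np.linalg.pinv(X)

def match(e_src, target_space):
    """Cross-lingual entity matching: nearest neighbour of the
    transformed source vector in the target embedding space."""
    return min(target_space,
               key=lambda k: np.linalg.norm(M @ e_src - target_space[k]))
```

In MTransE the transition is learned jointly with the monolingual embeddings under a loss function; the closed-form fit here only illustrates the role the transformation plays.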


2020 ◽  
Vol 34 (10) ◽  
pp. 13801-13802
Author(s):  
Jiale Han ◽  
Bo Cheng ◽  
Xu Wang

Graph convolutional networks (GCNs) have been applied to the knowledge base question answering (KBQA) task. However, the pairwise connections between nodes in a GCN limit its ability to represent high-order data correlations. Furthermore, most previous work does not fully exploit semantic relation information, which is vital to reasoning. In this paper, we propose a novel multi-hop KBQA model based on a hypergraph convolutional network. By constructing a hypergraph, pairwise connections between nodes are converted into higher-order connections between nodes and hyperedges, which effectively encodes complex correlated data. To better exploit the semantic information of relations, we apply a co-attention method to learn the similarity between each relation and the query, and assign weights to different relations. Experimental results demonstrate the effectiveness of the model.
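A hypergraph convolution layer of the kind the abstract describes is commonly written as X' = Dv^{-1/2} H W De^{-1} Hᵀ Dv^{-1/2} X Θ, where H is the node-by-hyperedge incidence matrix, so a single hyperedge can join any number of nodes and lift the pairwise limit of a plain GCN. A minimal NumPy sketch of that normalization (the incidence matrix, weights, and activation are illustrative, not the paper's exact layer):

```python
import numpy as np

def hypergraph_conv(X, H, Theta, edge_w=None):
    """One hypergraph convolution layer:
    X' = Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} X Theta, with ReLU.

    X:      node features, shape (n_nodes, in_dim)
    H:      incidence matrix, shape (n_nodes, n_hyperedges), 0/1 entries
    Theta:  learnable weights, shape (in_dim, out_dim)
    edge_w: optional per-hyperedge weights (e.g. from co-attention)
    """
    n, m = H.shape
    w = np.ones(m) if edge_w is None else np.asarray(edge_w, dtype=float)
    dv = H @ w                       # weighted node degrees
    de = H.sum(axis=0)               # hyperedge degrees (nodes per edge)
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(dv))
    De_inv = np.diag(1.0 / de)
    A = Dv_inv_sqrt @ H @ np.diag(w) @ De_inv @ H.T @ Dv_inv_sqrt
    return np.maximum(A @ X @ Theta, 0.0)
```

The per-hyperedge weights `edge_w` are where relation weights learned by a co-attention mechanism could enter, by up- or down-weighting the propagation through each hyperedge.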


2017 ◽  
Author(s):  
Patrick Klein ◽  
Simone Paolo Ponzetto ◽  
Goran Glavaš

Author(s):  
Rizka Sholikah ◽  
Agus Arifin ◽  
Chastine Fatichah ◽  
Ayu Purwarianti ◽  
...  
