Reinforcement Learning Over Knowledge Graphs for Explainable Dialogue Intent Mining

IEEE Access
2020
Vol 8
pp. 85348-85358
Author(s):  
Kai Yang ◽  
Xinyu Kong ◽  
Yafang Wang ◽  
Jie Zhang ◽  
Gerard De Melo
2021
Vol 103
pp. 107144
Author(s):  
Luyi Bai ◽  
Wenting Yu ◽  
Mingzhuo Chen ◽  
Xiangnan Ma

Author(s):  
Guojia Wan ◽  
Shirui Pan ◽  
Chen Gong ◽  
Chuan Zhou ◽  
Gholamreza Haffari

Knowledge Graphs typically suffer from incompleteness. A popular approach to knowledge graph completion is to infer missing knowledge by multi-hop reasoning over the information found along other paths connecting a pair of entities. However, multi-hop reasoning remains challenging because the reasoning process often runs into the multiple-semantics issue, in which a relation or an entity carries several meanings. To deal with this, we propose a novel Hierarchical Reinforcement Learning framework that automatically learns chains of reasoning from a Knowledge Graph. Our framework is inspired by the hierarchical process through which humans handle cognitively ambiguous cases. The whole reasoning process is decomposed into a two-level hierarchy of Reinforcement Learning policies, one encoding historical information and the other learning a structured action space. As a consequence, the framework handles the multiple-semantics issue in a more feasible and natural way. Experimental results show that our proposed model achieves substantial improvements on ambiguous-relation tasks.
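The abstract only outlines the two-level decomposition, so below is a minimal sketch, not the authors' code, of how a high-level policy might first pick a semantic cluster of relations (to disambiguate multiple meanings) before a low-level policy picks a concrete edge within that cluster. The toy triples, the cluster-by-relation-stem rule, and names such as `step` and `clusters` are all assumptions made for illustration, with random embeddings standing in for learned ones.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 16

# Toy KG: relations carry a suffix marking their "sense", so they can be
# grouped into semantic clusters for the high-level policy to choose from.
triples = [
    ("bank", "located_in:geo", "river_side"),
    ("bank", "located_in:org", "city_center"),
    ("city_center", "part_of", "metropolis"),
]
entities = {e for h, _, t in triples for e in (h, t)}
relations = {r for _, r, _ in triples}
clusters = {r: r.split(":")[0] for r in relations}  # cluster = relation stem

# Random embeddings stand in for learned ones.
emb = {x: rng.normal(size=DIM) for x in entities | relations | set(clusters.values())}

def softmax(x):
    x = x - x.max()
    p = np.exp(x)
    return p / p.sum()

def step(entity, history):
    """One reasoning hop: the high-level policy picks a relation cluster,
    the low-level policy picks a concrete edge within that cluster."""
    out_edges = [(r, t) for h, r, t in triples if h == entity]
    if not out_edges:
        return None, history
    # High-level policy: score candidate clusters against the history vector.
    cands = sorted({clusters[r] for r, _ in out_edges})
    probs = softmax(np.array([history @ emb[c] for c in cands]))
    chosen_cluster = cands[rng.choice(len(cands), p=probs)]
    # Low-level policy: score only the edges that belong to the chosen cluster.
    edges = [(r, t) for r, t in out_edges if clusters[r] == chosen_cluster]
    scores = softmax(np.array([history @ (emb[r] + emb[t]) for r, t in edges]))
    r, t = edges[rng.choice(len(edges), p=scores)]
    return t, history + emb[r]  # fold the chosen relation into the history

entity, history = "bank", emb["bank"].copy()
for _ in range(2):
    entity, history = step(entity, history)
    if entity is None:
        break
    print("hopped to:", entity)
```

Restricting the low-level policy to the cluster chosen above is what keeps the action space structured: in this toy example the two senses of located_in are never scored against one another directly.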


2021
Author(s):  
Jinzhi Liao ◽  
Xiang Zhao ◽  
Jiuyang Tang ◽  
Weixin Zeng ◽  
Zhen Tan

With the proliferation of large-scale knowledge graphs (KGs), multi-hop knowledge graph reasoning has become a capstone capability that enables machines to handle intelligent tasks, especially where an explicit reasoning path is valued for decision making. To train a KG reasoner, supervised learning-based methods suffer from false negatives, i.e., paths unseen during training cannot be found at prediction time; in contrast, reinforcement learning (RL)-based methods do not require labeled paths and can explore so as to cover many appropriate reasoning paths. In this connection, efforts have been dedicated to investigating several RL formulations for multi-hop KG reasoning. In particular, current RL-based methods generate rewards only at the very end of the reasoning process, so short paths with fewer hops than a given threshold are likely to be overlooked and the overall performance is impaired. To address this problem, we propose a revised RL formulation of multi-hop KG reasoning that is characterized by two novel designs: the stop signal and the worth-trying signal. The stop signal instructs the RL agent to stay at the entity after finding the answer, preventing it from hopping further even if the threshold has not been reached; meanwhile, the worth-trying signal encourages the agent to learn partial patterns from paths that fail to lead to the answer. To validate the design of our model, comprehensive experiments are carried out on three benchmark knowledge graphs, and the results and analysis suggest its superiority over state-of-the-art methods.
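Since the abstract leaves the two reward signals abstract, here is a minimal sketch, not the paper's implementation, of how a stop signal and a worth-trying signal could be computed for a finished episode. The fixed hop budget `MAX_HOPS`, the self-looping `STOP` action, and the similarity-based partial reward are assumptions made for illustration.

```python
import numpy as np

MAX_HOPS = 3   # hop budget: every episode runs for exactly this many actions
STOP = "STOP"  # assumed self-loop action that keeps the agent at its entity

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def episode_rewards(path, answer, emb):
    """path: MAX_HOPS (relation, entity) actions taken by the agent."""
    assert len(path) == MAX_HOPS
    rewards, reached = [], False
    for relation, entity in path:
        if reached or relation == STOP:
            # Stop signal: after the answer is found (or STOP is chosen),
            # staying put neither earns nor costs anything, so the agent is
            # not pushed to keep hopping just to fill the budget.
            rewards.append(0.0)
            continue
        reached = entity == answer
        rewards.append(1.0 if reached else 0.0)
    if not reached:
        # Worth-trying signal: a failed path still receives a soft reward,
        # here a scaled similarity between the last entity and the answer,
        # so partial patterns in failed paths are not discarded entirely.
        last_entity = path[-1][1]
        rewards[-1] = 0.1 * max(cosine(emb[last_entity], emb[answer]), 0.0)
    return rewards

# Toy usage with random embeddings standing in for learned ones.
rng = np.random.default_rng(1)
emb = {e: rng.normal(size=8) for e in ("paris", "france", "berlin")}
good = [("capital_of", "france"), (STOP, "france"), (STOP, "france")]
bad = [("capital_of", "berlin"), ("located_in", "berlin"), (STOP, "berlin")]
print(episode_rewards(good, "france", emb))  # terminal hit, then the stop signal
print(episode_rewards(bad, "france", emb))   # miss, softened by the worth-trying signal
```

In an actual policy-gradient setup these per-step rewards would be discounted and fed into, e.g., REINFORCE; the sketch only shows how the two signals reshape the terminal-only reward that the abstract criticizes.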


Author(s):  
Aritran Piplai ◽  
Priyanka Ranade ◽  
Anantaa Kotal ◽  
Sudip Mittal ◽  
Sandeep Nair Narayanan ◽  
...  

Decision
2016
Vol 3 (2)
pp. 115-131
Author(s):  
Helen Steingroever ◽  
Ruud Wetzels ◽  
Eric-Jan Wagenmakers
