SLDP: Sequence learning dependency parsing model using long short-term memory

Author(s):  
Qing-Yu Zhou ◽  
De-Quan Zheng ◽  
Tie-Jun Zhao


2015 ◽  
Author(s):  
Chris Dyer ◽  
Miguel Ballesteros ◽  
Wang Ling ◽  
Austin Matthews ◽  
Noah A. Smith

2020 ◽  
Vol 143 (5) ◽  
Author(s):  
Alparslan Emrah Bayrak ◽  
Zhenghui Sha

Abstract: Design can be viewed as a sequential and iterative search process. Fundamental understanding and computational modeling of human sequential design decisions are essential for developing new methods in design automation and human–AI collaboration. This paper presents an approach for predicting designers' future search behaviors in a sequential design process under an unknown objective function by combining sequence learning with game theory. While the majority of existing studies analyze sequential design decisions from descriptive and prescriptive points of view, this study develops a predictive framework. We use data containing designers' actual sequential search decisions under competition, collected from a black-box function optimization game developed previously. We integrate long short-term memory (LSTM) networks with the delta method to predict the next sampling point with a distribution, and combine this model with a non-cooperative game to predict whether a designer will stop searching the design space based on their belief about the opponent's best design. In the function optimization game, the proposed model accurately predicts 82% of the next design variable values and 92% of the next function values in the test data within an upper and lower bound, suggesting that an LSTM network can effectively predict a designer's next decisions from their past decisions. Further, the game-theoretic model predicts that 60.8% of the participants will stop searching for designs sooner than they actually do, while accurately predicting when the remaining 39.2% stop. These results suggest that a majority of the designers strongly tend to overestimate their opponents' performance, leading them to spend more effort searching for better designs than they would have, had they known their opponents' actual performance.
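The abstract's core mechanism is an LSTM that consumes a designer's past sampling decisions and predicts the next one. As a minimal sketch of that idea (not the authors' trained model, and without the delta-method interval estimation), the following shows a single LSTM cell forward pass in plain NumPy: the hidden and cell states carry the search history, and the final hidden state would feed a regression head predicting the next sampling point. All weights and dimensions here are arbitrary assumptions for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step over an input x given previous states (h, c).

    W: (4n, d) input weights, U: (4n, n) recurrent weights, b: (4n,) bias,
    stacked in gate order [input, forget, output, candidate].
    """
    z = W @ x + U @ h + b        # stacked gate pre-activations, shape (4n,)
    n = h.shape[0]
    i = sigmoid(z[:n])           # input gate: how much new info to write
    f = sigmoid(z[n:2 * n])      # forget gate: how much old state to keep
    o = sigmoid(z[2 * n:3 * n])  # output gate: how much state to expose
    g = np.tanh(z[3 * n:])       # candidate cell update
    c_new = f * c + i * g        # blend old cell state with candidate
    h_new = o * np.tanh(c_new)   # hidden state summarizes the sequence so far
    return h_new, c_new

# Run the cell over a short sequence of (hypothetical) past design samples.
rng = np.random.default_rng(0)
n, d = 4, 2                      # hidden size, design-variable dimension
W = rng.standard_normal((4 * n, d))
U = rng.standard_normal((4 * n, n))
b = np.zeros(4 * n)
h, c = np.zeros(n), np.zeros(n)
for x in rng.standard_normal((5, d)):
    h, c = lstm_step(x, h, c, W, U, b)
```

After the loop, `h` is a fixed-size summary of the five past samples; in the paper's setting a linear layer on top of it would output the predicted next design point, with the delta method supplying the upper and lower bounds.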


Proceedings ◽  
2019 ◽  
Vol 21 (1) ◽  
pp. 49
Author(s):  
Michalina Strzyz ◽  
David Vilares ◽  
Carlos Gómez-Rodríguez

Dependency parsing has traditionally relied on transition-based (shift-reduce) or graph-based algorithms to identify binary dependency relations between the words in a sentence. In this study, we adopt a radically different approach and cast full dependency parsing as a pure sequence tagging task. In particular, we apply a linearization function to the tree, producing an output label for each token that encodes the word's dependency relations. We then follow a supervised strategy and train a bidirectional long short-term memory network to learn to predict such linearized trees. Contrary to previous attempts at this, the results show that the approach yields dependency parsing that is not only accurate but also fast. Furthermore, we obtain even faster and more accurate parsers by recasting the problem as multitask learning, with a twofold objective: to reduce the output vocabulary and to exploit hidden patterns coming from a second parsing paradigm (constituent grammars) used as an auxiliary task.
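The key ingredient of this abstract is the linearization function that turns a dependency tree into one tag per token, so a standard sequence tagger can learn it. As a hedged illustration, here is one simple such encoding: each token's tag is the signed offset to its head plus the dependency relation. This is not necessarily the exact encoding used by the authors (who explore several, including PoS-based relative positions); it just makes the idea of a reversible tree-to-tags mapping concrete.

```python
def linearize(heads, deprels):
    """Encode a dependency tree as one tag per token.

    heads[i] is the 1-based index of token i+1's head (0 = artificial root);
    the tag combines the head's signed offset from the token with the
    dependency relation, e.g. "+1@nsubj".
    """
    tags = []
    for i, (head, rel) in enumerate(zip(heads, deprels), start=1):
        offset = head - i          # relative position of the head
        tags.append(f"{offset:+d}@{rel}")
    return tags

def delinearize(tags):
    """Recover (head, deprel) pairs from a tag sequence."""
    pairs = []
    for i, tag in enumerate(tags, start=1):
        offset, rel = tag.split("@")
        pairs.append((i + int(offset), rel))
    return pairs

# "The dog barks": "The" -> "dog", "dog" -> "barks", "barks" is the root.
tags = linearize([2, 3, 0], ["det", "nsubj", "root"])
# tags == ["+1@det", "+1@nsubj", "-3@root"]
```

A bidirectional LSTM tagger then only has to predict one such label per token, and `delinearize` reconstructs the full tree from its output.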


2020 ◽  
Author(s):  
Abdolreza Nazemi ◽  
Johannes Jakubik ◽  
Andreas Geyer-Schulz ◽  
Frank J. Fabozzi
