DYTEST: a self-learning algorithm using dynamic testability measures to accelerate test generation

Author(s):
W. Mao, M.D. Ciletti
2011, Vol 38 (7), pp. 642-651
2019, Vol 283, pp. 07001
Author(s):
Jingxi Wang, Chau Yuen, Yong Liang Guan, Fengxiang Ge

In this paper, we apply reinforcement learning, an important branch of machine learning, to formulate an optimal self-learning strategy for interacting with an unknown and dynamically varying underwater channel. The dynamic and volatile nature of the underwater channel environment makes it impractical to rely on prior knowledge. Reinforcement learning resolves this problem: it selects the optimal parameters for transmitting data packets and achieves better throughput without any prior environmental information. However, because the speed of sound underwater is slow, the delay in returning a packet acknowledgement from the receiver to the sender is substantial, which degrades the convergence speed of the reinforcement learning algorithm. Since reinforcement learning requires timely acknowledgement feedback from the receiver, we combine a juggling-like ARQ (Automatic Repeat Request) mechanism with reinforcement learning to mitigate this long-delayed reward feedback problem. The approach is evaluated by simulation in OPNET.
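The interplay the abstract describes can be sketched in a few lines of Python. This is not the paper's algorithm or its OPNET model: the parameter sets, rates, success probabilities, window size, and ACK delay below are all invented for illustration, and a simple epsilon-greedy bandit stands in for the reinforcement-learning agent. The sketch shows the one structural point the abstract makes: with slow acoustic propagation, a juggling-like ARQ keeps several packets in flight so learning is not stalled waiting for each acknowledgement, and sequence numbers let each delayed ACK be credited to the action that produced it.

```python
import random

# Illustrative transmission parameter sets (names, rates, and success
# probabilities are invented for this sketch, not taken from the paper).
ACTIONS = [
    {"name": "low-rate",  "rate": 1.0, "success_p": 0.9},
    {"name": "mid-rate",  "rate": 2.0, "success_p": 0.6},
    {"name": "high-rate", "rate": 4.0, "success_p": 0.2},
]

class BanditSender:
    """Epsilon-greedy learner: picks a parameter set per packet and updates
    per-action throughput estimates from delayed per-packet ACK feedback."""

    def __init__(self, n_actions, epsilon=0.1, seed=0):
        self.rng = random.Random(seed)
        self.epsilon = epsilon
        self.counts = [0] * n_actions
        self.values = [0.0] * n_actions      # running mean reward per action
        self.in_flight = {}                  # seq -> action ("juggled" packets)
        self.next_seq = 0

    def send(self):
        """Choose an action for the next packet and mark it in flight."""
        if self.rng.random() < self.epsilon:
            a = self.rng.randrange(len(self.values))
        else:
            a = max(range(len(self.values)), key=lambda i: self.values[i])
        seq, self.next_seq = self.next_seq, self.next_seq + 1
        self.in_flight[seq] = a
        return seq, a

    def ack(self, seq, reward):
        """Credit the delayed reward to whichever action sent packet `seq`."""
        a = self.in_flight.pop(seq)
        self.counts[a] += 1
        self.values[a] += (reward - self.values[a]) / self.counts[a]

def simulate(rounds=2000, window=4, delay=3, seed=1):
    """Toy channel: each ACK returns `delay` ticks after its packet is sent,
    and the sender keeps up to `window` packets in flight rather than
    stopping and waiting for each acknowledgement."""
    rng = random.Random(seed)
    sender = BanditSender(len(ACTIONS))
    pending = []                             # (due_tick, seq, reward)
    total = 0.0
    for t in range(rounds):
        while len(sender.in_flight) < window:        # juggle several packets
            seq, a = sender.send()
            act = ACTIONS[a]
            # Reward is achieved throughput: the rate on success, else 0.
            reward = act["rate"] if rng.random() < act["success_p"] else 0.0
            pending.append((t + delay, seq, reward))
        for entry in [p for p in pending if p[0] <= t]:  # deliver due ACKs
            pending.remove(entry)
            sender.ack(entry[1], entry[2])
            total += entry[2]
    return sender, total
```

Because several packets are in flight at once, each reward arrives ticks after the decision that earned it; the sequence-number map is what lets the learner assign every delayed ACK to the correct action. That per-packet credit assignment is, in this simplified reading, the role the juggling-like ARQ mechanism plays alongside the reinforcement learner.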

