A collaborative distributed multi-agent reinforcement learning technique for dynamic agent shortest path planning via selected sub-goals in complex cluttered environments

Author(s):  
D. B. Megherbi ◽  
M. Kim
IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 146264-146272 ◽  
Author(s):  
Han Qie ◽  
Dianxi Shi ◽  
Tianlong Shen ◽  
Xinhai Xu ◽  
Yuan Li ◽  
...  

Author(s):  
Nafiseh Masoudi ◽  
Georges M. Fadel ◽  
Margaret M. Wiecek

Abstract:
Routing, or path planning, is the problem of finding a collision-free and preferably shortest path in an environment usually scattered with polygonal or polyhedral obstacles. Geometric algorithms often tackle the problem by modeling the environment as a collision-free graph, to which search algorithms such as Dijkstra's can then be applied to find an optimal path. Previously developed methods construct the collision-free graph by exploring the entire workspace of the problem. For single-source, single-destination planning problems, this generates unnecessary information of little value and can increase the time complexity of the algorithm. In this paper, first a comprehensive review of previous studies on path planning is presented. Next, an approach to the planar problem based on the notion of convex hulls is introduced, and its efficiency is tested on sample planar problems. The proposed algorithm focuses only on the portion of the workspace that interacts with the straight line connecting the start and goal points. Hence, the size of the roadmap is reduced while the exact globally optimal solution is still generated. In the worst case, where all the obstacles in a planar workspace intersect, the algorithm has a time complexity of O(n log(n/f)), with n being the total number of vertices and f the number of obstacles. The algorithm outperforms previous attempts at reducing the size of the graph, yet still generates the exact solution.
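The abstract's search step, running Dijkstra's algorithm on a collision-free roadmap graph, can be sketched as follows. The roadmap construction itself (the paper's convex-hull contribution) is not reproduced here; the toy graph and node names are illustrative assumptions, not from the paper.

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path on a weighted roadmap graph.

    graph: dict mapping node -> list of (neighbor, edge_length) pairs.
    Returns (total_length, path), or (float('inf'), []) if unreachable.
    """
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == goal:
            # Walk the predecessor chain back to the start.
            path = [u]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return d, path[::-1]
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    return float('inf'), []

# Toy roadmap: start S, goal G, two obstacle vertices A and B.
roadmap = {
    'S': [('A', 2.0), ('B', 4.0)],
    'A': [('G', 3.0), ('B', 1.0)],
    'B': [('G', 1.0)],
}
length, path = dijkstra(roadmap, 'S', 'G')
```

The paper's contribution is to keep this graph small by only inserting vertices of obstacles that interact with the start-goal segment; the search itself stays standard.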


Author(s):  
Samir Lahouar ◽  
Said Zeghloul ◽  
Lotfi Romdhane

In this paper, we propose a new path-planning method for robot manipulators in cluttered environments, based on lazy grid sampling. Grid cells are built while searching for a path to the goal configuration. The proposed planner acts in two modes: a depth mode, active while the robot is far from obstacles, drives it toward its goal, while a width-search mode takes over when the robot gets close to an obstacle and finds the shortest path around it. The method also narrows the gap between pre-computed grid methods and lazy grid methods, and no heuristic function is needed to guide the search process. An example of a robot in a cluttered environment is presented to show the efficiency of the method.
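The two-mode idea can be illustrated on a 2D occupancy grid. This is a deliberately simplified sketch, not the paper's configuration-space planner: the depth mode is a greedy step that strictly reduces Manhattan distance to the goal, and the width mode falls back to breadth-first search when greedy progress stalls near an obstacle.

```python
from collections import deque

def plan(grid, start, goal):
    """Toy two-mode grid planner (simplified illustration).

    grid: 2D list, 0 = free cell, 1 = obstacle. Cells are (row, col).
    Returns the path as a list of cells, or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])

    def neighbors(c):
        r, q = c
        for dr, dq in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nq = r + dr, q + dq
            if 0 <= nr < rows and 0 <= nq < cols and grid[nr][nq] == 0:
                yield (nr, nq)

    def manhattan(c):
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])

    def bfs(src):
        # Width mode: systematic search that goes around the obstacle.
        prev = {src: None}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            if u == goal:
                p = []
                while u is not None:
                    p.append(u)
                    u = prev[u]
                return p[::-1]
            for v in neighbors(u):
                if v not in prev:
                    prev[v] = u
                    queue.append(v)
        return None

    path = [start]
    cur = start
    while cur != goal:
        step = min(neighbors(cur), key=manhattan, default=None)
        if step is not None and manhattan(step) < manhattan(cur):
            cur = step            # depth mode: greedy progress toward the goal
            path.append(cur)
        else:
            tail = bfs(cur)       # blocked: switch to width mode
            if tail is None:
                return None
            return path + tail[1:]
    return path

grid = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]
route = plan(grid, (0, 0), (2, 2))
```

The actual planner builds grid cells lazily in the manipulator's configuration space and returns to depth mode after clearing the obstacle; this sketch only conveys the mode-switching behavior.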


2020 ◽  
Vol 1584 ◽  
pp. 012006
Author(s):  
Xiaoqi Wang ◽  
Lina Jin ◽  
Haiping Wei

2014 ◽  
pp. 39-44
Author(s):  
Anton Kabysh ◽  
Vladimir Golovko ◽  
Arunas Lipnickas

This paper describes a multi-agent influence-learning approach and a reinforcement-learning adaptation of it. The technique is used for distributed, adaptive, and self-organizing control in a multi-agent system. It is quite simple and uses the agents' influences on one another to estimate the learning error between them. The best influences are rewarded via reinforcement learning, a well-proven learning technique. It is shown that this learning rule supports positive-reward interactions between agents and requires no information beyond that of a standard reinforcement-learning algorithm. The technique produces optimal behavior of the multi-agent system with fast convergence.
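One plausible reading of the influence-learning rule is a standard Q-learning update in which each agent's reward is augmented by the influence signals received from its neighbors; the function below is a hypothetical sketch under that assumption, not the paper's exact formulation.

```python
def influence_q_update(Q, actions, state, action, reward, next_state,
                       influences, alpha=0.1, gamma=0.9):
    """One Q-learning step with an influence-shaped reward.

    Q: dict mapping (state, action) -> estimated value.
    actions: iterable of all actions available in next_state.
    influences: scalar influence rewards received from neighboring agents.
    Returns the temporal-difference error of this update.
    """
    shaped = reward + sum(influences)  # neighbors' influences shape the reward
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    td_error = shaped + gamma * best_next - old
    Q[(state, action)] = old + alpha * td_error
    return td_error

# One agent takes 'right' in state 's0', earns reward 1.0, and receives
# an influence signal of 0.5 from a neighbor.
Q = {}
actions = ['left', 'right']
td = influence_q_update(Q, actions, 's0', 'right', 1.0, 's1', [0.5])
```

Beyond the shaped reward term, the update is exactly the standard tabular Q-learning rule, which matches the abstract's claim that no extra information is needed.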

