Extending Behavior Trees with Market-Based Task Allocation in Dynamic Environments

Author(s):  
Tao Wang
Dianxi Shi
Wei Yi

2021
Author(s):
Ching-Wei Chuang
Harry H. Cheng

Abstract: In the modern world, building an autonomous multi-robot system is essential for coordinating and controlling robots that help humans, because several low-cost robots are more robust and efficient than one expensive, powerful robot at executing the tasks that achieve the overall goal of a mission. One research area, multi-robot task allocation (MRTA), has become substantial in multi-robot systems. Assigning suitable tasks to suitable robots is crucial to coordination and may directly influence the outcome of a mission. Over the past few decades, although numerous researchers have proposed algorithms and approaches for solving MRTA problems in different multi-robot systems, certain challenges remain difficult to overcome, such as dynamic environments, changing task information, heterogeneous robot abilities, the dynamic condition of a robot, and uncertainties from sensors or actuators. In this paper, we propose a novel approach to handling MRTA problems with Bayesian networks (BNs) under these challenging circumstances. Our experiments show that the proposed approach can effectively solve real problems in a search-and-rescue mission in centralized, decentralized, and distributed multi-robot systems with real, low-cost robots in dynamic environments. In future work, we will demonstrate that our approach is trainable and can be used in large-scale, complicated environments. Researchers may be able to apply our approach to other applications to explore its extensibility.
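The abstract does not include implementation details. As a hedged illustration of the general idea it describes (scoring robot-task suitability with a Bayesian network, then allocating tasks), the following minimal sketch uses an entirely hypothetical network: the variables `battery` and `sensor_ok`, the probability tables, and the greedy allocation rule are invented for illustration and are not taken from the paper.

```python
# Minimal sketch: score each robot-task pair with a tiny hand-built
# Bayesian network P(success | battery, sensor_ok) and assign greedily.
# All variables and probabilities are illustrative, not from the paper.
from itertools import product

# Conditional probability table: P(success=True | battery, sensor_ok)
CPT_SUCCESS = {
    ("high", True): 0.9,
    ("high", False): 0.6,
    ("low", True): 0.5,
    ("low", False): 0.2,
}

def p_success(p_battery_high, p_sensor_ok):
    """Marginalise P(success) over the two parent variables."""
    total = 0.0
    for battery, sensor in product(("high", "low"), (True, False)):
        p_b = p_battery_high if battery == "high" else 1 - p_battery_high
        p_s = p_sensor_ok if sensor else 1 - p_sensor_ok
        total += p_b * p_s * CPT_SUCCESS[(battery, sensor)]
    return total

def allocate(robots, tasks):
    """Greedy MRTA: repeatedly pick the (robot, task) pair with the
    highest priority-weighted predicted success probability."""
    assignment = {}
    free_robots, open_tasks = set(robots), set(tasks)
    while free_robots and open_tasks:
        r, t = max(
            ((r, t) for r in free_robots for t in open_tasks),
            key=lambda rt: p_success(*robots[rt[0]]) * tasks[rt[1]],
        )
        assignment[t] = r
        free_robots.discard(r)
        open_tasks.discard(t)
    return assignment

robots = {"r1": (0.9, 0.95), "r2": (0.3, 0.8)}  # (P(battery high), P(sensor ok))
tasks = {"search": 1.0, "rescue": 0.7}          # hypothetical task priorities
print(allocate(robots, tasks))  # → {'search': 'r1', 'rescue': 'r2'}
```

In a real system the network would be learned from data and conditioned on live sensor readings; here the parent distributions are simply supplied per robot to keep the sketch self-contained.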


2018
Author(s):
Rui Chen
Bernd Meyer
Julian García

Abstract: Social insect colonies are capable of allocating their workforce in a decentralised fashion, addressing a variety of tasks and responding effectively to changes in the environment. This process is fundamental to their ecological success, but the mechanisms behind it remain poorly understood. While most models focus on internal and individual factors, empirical evidence highlights the importance of ecology and social interactions. To address this gap, we propose a game-theoretical model of task allocation. Individuals are characterised by a trait that determines how they split their energy between two prototypical tasks: foraging and regulation. To be viable, a colony needs to learn to adequately allocate its workforce between these two tasks. We study two different processes: individuals can learn relying exclusively on their own experience, or by using the experiences of others via social learning. We find that social organisation can be determined by the ecology alone, irrespective of interaction details. Weakly specialised colonies, in which all individuals tend to both tasks, emerge when foraging is cheap; harsher environments, on the other hand, lead to strongly specialised colonies in which each individual fully engages in a single task. We compare the outcomes of self-organised task allocation with optimal group performance. Counter to intuition, strongly specialised colonies perform suboptimally, whereas the group performance of weakly specialised colonies is closer to optimal. Social interactions lead to important differences when the colony deals with dynamic environments: colonies whose individuals rely on their own experience are more flexible when dealing with change. Our computational model is aligned with mathematical predictions in tractable limits. This kind of model is useful in framing relevant and important empirical questions, where ecology and interactions are key elements of hypotheses and predictions.
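The abstract's setup can be caricatured in a few lines. The sketch below is a toy, not the paper's model: the complementary colony-output function, the foraging-cost parameter, and the accept-if-better learning rule are all illustrative assumptions. Each agent carries a trait in [0, 1] (its energy share for foraging) and adjusts only that trait, in the spirit of individual learning.

```python
# Toy sketch of a two-task allocation game: foraging vs. regulation.
# All functional forms and parameters are illustrative assumptions.
import random

random.seed(0)

def colony_output(traits):
    """Colony needs both tasks: output is the product of total effort
    spent on foraging and on regulation (a simple complementarity)."""
    foraging = sum(traits)
    regulation = sum(1 - x for x in traits)
    return foraging * regulation

def fitness(traits, cost):
    """Group performance: output minus an ecological cost per unit of foraging."""
    return colony_output(traits) - cost * sum(traits)

def individual_learning(n=20, cost=0.5, steps=5000, sigma=0.05):
    """Each agent perturbs only its own trait and keeps the change
    when the resulting performance improves (hill climbing)."""
    traits = [random.random() for _ in range(n)]
    for _ in range(steps):
        i = random.randrange(n)
        old = traits[i]
        traits[i] = min(1.0, max(0.0, old + random.gauss(0, sigma)))
        if fitness(traits, cost) < fitness(traits[:i] + [old] + traits[i + 1:], cost):
            traits[i] = old  # revert: the change did not help
    return traits

traits = individual_learning()
print(f"mean trait: {sum(traits) / len(traits):.2f}")
```

With this output function the optimum puts total foraging effort at (n - cost) / 2, so the mean trait settles near 0.49 here; varying `cost` lets one probe how a harsher ecology shifts the allocation, which is the kind of question the paper studies with a far richer model.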


Author(s):  
David R. Schneider ◽  
Mark Campbell

Of the methods developed for optimal task allocation, mixed-integer linear programming (MILP) techniques are among the most prominent. A new method, presented in this paper, produces optimal solutions identical to those of the MILP techniques but in computation times orders of magnitude faster. This new method, referred to as G*TA, uses a minimum spanning forest algorithm to generate optimistic predictive costs in an A* framework, and a greedy approximation method to create upper-bound estimates. A second new method, which combines the G*TA and MILP methods and is referred to as G*MILP, is also presented for its scaling potential. This combined method uses G*TA to solve a series of sub-problems, and the final optimal task allocation is handled through MILP. All of these methods are compared and validated through a large series of real-time tests using the Cornell RoboFlag testbed, a multi-robot, highly dynamic test environment.
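G*TA itself is not specified in the abstract. The following sketch illustrates the underlying idea in a deliberately simplified single-robot setting: A* search over task-visit orders, with a minimum-spanning-tree cost over the unvisited tasks as the optimistic (admissible) cost-to-go. The reduction to one robot, the Euclidean costs, and the routine names are assumptions made for brevity.

```python
# Illustrative sketch (not the paper's G*TA): A* over task-visit orders for
# one robot, with an MST over the remaining tasks as an admissible heuristic.
import heapq
from math import dist

def mst_cost(points):
    """Prim's algorithm: total edge weight of a minimum spanning tree."""
    if not points:
        return 0.0
    points = list(points)
    in_tree, out = {points[0]}, set(points[1:])
    total = 0.0
    while out:
        d, p = min((dist(a, b), b) for a in in_tree for b in out)
        total += d
        in_tree.add(p)
        out.discard(p)
    return total

def heuristic(pos, remaining):
    """Optimistic cost-to-go: MST over remaining tasks plus the cheapest
    edge connecting the robot's position to that tree."""
    if not remaining:
        return 0.0
    return mst_cost(remaining) + min(dist(pos, t) for t in remaining)

def astar_tour(start, tasks):
    """Cheapest order in which to visit every task (open tour)."""
    tasks = frozenset(tasks)
    frontier = [(heuristic(start, tasks), 0.0, start, tasks)]
    best = {}
    while frontier:
        f, g, pos, remaining = heapq.heappop(frontier)
        if not remaining:
            return g  # all tasks visited: g is the optimal tour cost
        if best.get((pos, remaining), float("inf")) <= g:
            continue
        best[(pos, remaining)] = g
        for t in remaining:
            g2 = g + dist(pos, t)
            heapq.heappush(
                frontier, (g2 + heuristic(t, remaining - {t}), g2, t, remaining - {t})
            )
    return float("inf")

tasks = [(1.0, 0.0), (2.0, 0.0), (2.0, 1.0)]
print(astar_tour((0.0, 0.0), tasks))  # → 3.0
```

The heuristic is admissible because any path from the robot through all remaining tasks contains one edge leaving the robot plus a subgraph spanning the tasks, each of which is lower-bounded by the respective term; that is what makes the predicted costs "optimistic" in the A* sense the abstract describes.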


Author(s):  
Luke Johnson
Sameera Ponda
Han-Lim Choi
Jonathan How

2009
Author(s):
Sean C. Mondesire
Annie S. Wu
Misty Blowers
John C. Sciortino, Jr.

2009
Author(s):
Sallie J. Weaver
Rebecca Lyons
Eduardo Salas
David A. Hofmann
