On Numerical 2D P Colonies Modelling the Grey Wolf Optimization Algorithm

Processes ◽  
2021 ◽  
Vol 9 (2) ◽  
pp. 330
Author(s):  
Daniel Valenta ◽  
Miroslav Langer

2D P colonies are a version of P colonies with a two-dimensional environment, designed for observing the behavior of a community of very simple agents living in a shared environment. Each agent is equipped with a set of programs consisting of a small number of simple rules; these programs allow the agent to act and move in the environment. 2D P colonies have been shown to be suitable for simulating various (not only) multi-agent systems and natural phenomena, such as flash floods. The Grey Wolf Algorithm is an optimization algorithm inspired by the social dynamics found in packs of grey wolves and by their ability to dynamically create hierarchies in which every member has a clearly defined role. In our previous papers, we extended the 2D P colony with a universal communication device, the blackboard, which allows the agents to share various information, e.g., their position or information about their surroundings. In this paper, we follow up on our previous research on the numerical 2D P colony with the blackboard. We present a computer simulator of the numerical 2D P colony with the blackboard, report the results of computer simulations, and compare these results with the original algorithm.
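For readers unfamiliar with the optimization algorithm the colony models, the canonical Grey Wolf Optimizer position-update rules (alpha, beta, and delta wolves guiding the rest of the pack, with the exploration coefficient `a` decaying from 2 to 0) can be sketched as follows. This is a minimal, generic GWO sketch, not the paper's P colony encoding; the function name and parameters are illustrative.

```python
import random

def gwo(f, dim, bounds, n_wolves=20, iters=100):
    """Minimise f over a box with a basic Grey Wolf Optimizer."""
    lo, hi = bounds
    wolves = [[random.uniform(lo, hi) for _ in range(dim)]
              for _ in range(n_wolves)]
    for t in range(iters):
        wolves.sort(key=f)
        # copy the three best wolves so updates below do not mutate them
        alpha, beta, delta = (w[:] for w in wolves[:3])
        a = 2 - 2 * t / iters  # decays linearly from 2 to 0
        for w in wolves:
            for d in range(dim):
                pos = 0.0
                for leader in (alpha, beta, delta):
                    r1, r2 = random.random(), random.random()
                    A = 2 * a * r1 - a          # exploration/exploitation
                    C = 2 * r2
                    D = abs(C * leader[d] - w[d])
                    pos += leader[d] - A * D    # step towards the leader
                w[d] = min(hi, max(lo, pos / 3))  # average of three pulls
    return min(wolves, key=f)
```

In the paper's setting, each wolf corresponds to an agent of the numerical 2D P colony, and the blackboard plays the role of the shared knowledge of the leaders' positions.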

Author(s):  
Ronen Nir ◽  
Erez Karpas

Designing multi-agent systems, where several agents work in a shared environment, requires coordinating between the agents so they do not interfere with each other. One of the canonical approaches to coordinating agents is enacting a social law, which applies restrictions on agents’ available actions. A good social law prevents the agents from interfering with each other, while still allowing all of them to achieve their goals. Recent work took the first step towards reasoning about social laws using automated planning and showed how to verify whether a given social law is robust, that is, whether it allows all agents to achieve their goals regardless of what the other agents do. This work relied on a classical planning formalism, which assumed actions are instantaneous and some external scheduler chooses which agent acts next. However, it is not directly applicable to multi-robot systems, because in the real world actions take time and the agents can act concurrently. In this paper, we show how the robustness of a social law in a continuous-time setting can be verified through compilation to temporal planning. We demonstrate our work both theoretically and on real robots.
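The interleaved-scheduler robustness notion from the classical setting can be illustrated with a toy brute-force check: two agents follow fixed paths over cells, the scheduler picks who moves next, and a plan pair is robust only if no interleaving ever puts both agents in the same cell. This is a hypothetical illustration of the concept, not the paper's temporal-planning compilation.

```python
def interleavings(n, m):
    """Yield every way to interleave n moves of A with m moves of B."""
    if n == 0:
        yield ['B'] * m
        return
    if m == 0:
        yield ['A'] * n
        return
    for rest in interleavings(n - 1, m):
        yield ['A'] + rest
    for rest in interleavings(n, m - 1):
        yield ['B'] + rest

def robust(path_a, path_b):
    """True iff no interleaving makes the agents share a cell."""
    for order in interleavings(len(path_a) - 1, len(path_b) - 1):
        i = j = 0
        collided = False
        for who in order:          # the scheduler moves one agent at a time
            if who == 'A':
                i += 1
            else:
                j += 1
            if path_a[i] == path_b[j]:
                collided = True
                break
        if collided:
            return False
    return True
```

Here a social law that forces agent B onto a route disjoint from A's (e.g., B travels `[5, 4, 3]` instead of `[5, 0, 1, 2, 3]` while A travels `[0, 1, 2]`) turns a non-robust plan pair into a robust one.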


2015 ◽  
Vol 16 (1) ◽  
pp. 176
Author(s):  
Fatiha Aityacine ◽  
Badr Hssina ◽  
Belaid Bouikhalene

In this article, we present a multi-agent approach to the design, modeling, and implementation of a "smart school" application. Indeed, several institutions adopt computerized management of education to meet the needs of students using multi-agent systems, which have the ability to act simultaneously in a shared environment. The purpose of this approach is to automate some administrative services of education, based on the theory of distributed artificial intelligence (DAI) and multi-agent systems (MAS). This multi-agent application integrates entities called agents that cooperate and communicate with each other to perform specific tasks. Our system is based on the JADE (Java Agent DEvelopment Framework) middleware, used for the implementation and management of agents. This multi-agent model is tested on personal data from an experiment conducted with students of Sultan Moulay Slimane University in Beni Mellal.


Author(s):  
Sho Yamauchi ◽  
Hidenori Kawamura ◽  
Keiji Suzuki

Flocking algorithms for multi-agent systems are distributed algorithms that generate complex formational movement despite having simple rules for each agent. These algorithms, a form of swarm intelligence, are flexible and robust. However, to exploit these features to generate flexible behavior in an autonomous system, greater flexibility is needed. To achieve this, the algorithms are extended to enable arbitrary lattice formation. In addition, the extended flocking algorithms can be regarded as an aggregation of oscillators, in which synchronization behavior is observed. The behavior of the extended flocking algorithms is difficult to explain as a consensus problem, but by treating the flock as a set of oscillators, it can be explained as a synchronization phenomenon.
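The "simple rules for each agent" referred to above are typically the classic boids triple of separation, alignment, and cohesion. A generic 2-D sketch of one update step, not the authors' extended lattice-formation algorithm, with illustrative radii and weights:

```python
import math

def flock_step(boids, dt=0.1, r_sep=1.0, r_neigh=5.0,
               w_sep=1.5, w_ali=1.0, w_coh=1.0):
    """One step of a basic boids model; each boid is (x, y, vx, vy)."""
    new = []
    for i, (x, y, vx, vy) in enumerate(boids):
        sep = [0.0, 0.0]; ali = [0.0, 0.0]; coh = [0.0, 0.0]; n = 0
        for j, (ox, oy, ovx, ovy) in enumerate(boids):
            if i == j:
                continue
            dx, dy = ox - x, oy - y
            d = math.hypot(dx, dy)
            if 0 < d < r_neigh:
                n += 1
                ali[0] += ovx; ali[1] += ovy   # match neighbours' velocity
                coh[0] += dx;  coh[1] += dy    # steer towards their centre
                if d < r_sep:                  # push away when too close
                    sep[0] -= dx / d; sep[1] -= dy / d
        if n:
            ax = w_sep * sep[0] + (w_ali * ali[0] + w_coh * coh[0]) / n
            ay = w_sep * sep[1] + (w_ali * ali[1] + w_coh * coh[1]) / n
        else:
            ax = ay = 0.0
        new.append((x + vx * dt, y + vy * dt, vx + ax * dt, vy + ay * dt))
    return new
```

Starting from scattered positions, repeated application of this step draws the boids together, which is the aggregate behavior the article then reinterprets as oscillator synchronization.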


2021 ◽  
Author(s):  
Sabine Topf ◽  
Maarten Speekenbrink

Stigmergy refers to the coordination of agents via artifacts of behaviours (behavioural traces) in the shared environment. Whilst primarily studied in biology and computer science/robotics, stigmergy underlies many human indirect interactions, both offline (e.g., trail building) and online (e.g., development of open-source software). In this review, we provide an introduction to stigmergy and emphasise how and where human stigmergy is distinct from animal or robot stigmergy, such as intentional communication via traces and causal inferences from the traces to the causing behaviour. Cognitive processes discussed on the agent level include attention, motivation, meaning and meta-cognition, as well as emergence/immergence, iterative learning and exploration/exploitation at the interface of individual agent and multi-agent systems. Characteristics of one-agent, two-agent and multi-agent systems are discussed and areas for future research highlighted.
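The core mechanism of coordination via behavioural traces can be sketched with a toy model: agents on a line deposit a trace on every visited cell and preferentially move towards stronger traces, so paths taken earlier recruit later agents. This is a hypothetical minimal illustration of stigmergy, not a model from the review.

```python
import random

def walk_with_traces(grid_len, n_agents, steps, deposit=1.0, seed=0):
    """Agents deposit traces and follow the stronger neighbouring trace."""
    rng = random.Random(seed)
    trace = [0.0] * grid_len
    agents = [rng.randrange(grid_len) for _ in range(n_agents)]
    for _ in range(steps):
        for k, pos in enumerate(agents):
            trace[pos] += deposit                  # leave a behavioural trace
            left = trace[pos - 1] if pos > 0 else -1.0
            right = trace[pos + 1] if pos < grid_len - 1 else -1.0
            if left == right:
                step = rng.choice([-1, 1])         # no signal: explore
            else:
                step = -1 if left > right else 1   # signal: exploit the trail
            agents[k] = min(grid_len - 1, max(0, pos + step))
    return agents, trace
```

No agent communicates directly; coordination emerges purely through the shared environment, which is the defining feature the review contrasts with direct interaction.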


2012 ◽  
Vol 13 (2) ◽  
pp. 149-173 ◽  
Author(s):  
AGOSTINO DOVIER ◽  
ANDREA FORMISANO ◽  
ENRICO PONTELLI

The paper presents a knowledge representation formalism, in the form of a high-level Action Description Language (ADL) for multi-agent systems, where autonomous agents reason and act in a shared environment. Agents are autonomously pursuing individual goals, but are capable of interacting through a shared knowledge repository. In their interactions through shared portions of the world, the agents deal with problems of synchronization and concurrency; the action language allows the description of strategies to ensure a consistent global execution of the agents’ autonomously derived plans. A distributed planning problem is formalized by providing the declarative specifications of the portion of the problem pertaining to a single agent. Each of these specifications is executable by a stand-alone CLP-based planner. The coordination among agents exploits a Linda infrastructure. The proposal is validated in a prototype implementation developed in SICStus Prolog.
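The Linda coordination model the agents rely on is a tuple space with blocking read/remove operations matched by templates. A minimal sketch in Python (the paper's implementation is in SICStus Prolog; class and method names here are illustrative, with `None` acting as a wildcard):

```python
import threading

class TupleSpace:
    """Minimal Linda-style tuple space: out() writes, in_() removes a
    matching tuple, rd() reads one without removing it."""
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    @staticmethod
    def _match(template, tup):
        return len(template) == len(tup) and all(
            t is None or t == v for t, v in zip(template, tup))

    def out(self, tup):
        with self._cond:
            self._tuples.append(tuple(tup))
            self._cond.notify_all()      # wake agents blocked on in_/rd

    def _take(self, template, remove):
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(template, tup):
                        if remove:
                            self._tuples.remove(tup)
                        return tup
                self._cond.wait()        # block until a match is out()

    def in_(self, template):
        return self._take(template, remove=True)

    def rd(self, template):
        return self._take(template, remove=False)
```

Synchronization primitives such as mutual exclusion fall out directly: an agent does `in_(("token",))` before entering a shared portion of the world and `out(("token",))` afterwards, so concurrent plans serialize on the token tuple.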

