Optimal Agent Framework: A Novel, Cost-Effective Model Articulation to Fill the Integration Gap between Agent-Based Modeling and Decision-Making

Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-30
Author(s):  
Abolfazl Taghavi ◽  
Sharif Khaleghparast ◽  
Kourosh Eshghi

Making proper decisions in today’s complex world is a challenging task for decision makers. A promising approach that can support decision makers in gaining a better understanding of complex systems is agent-based modeling (ABM). ABM has been developing over the last few decades as a methodology with many different applications and has enabled a better description of the dynamics of complex systems. However, the prescriptive facet of these applications is rarely portrayed. Adding a prescriptive decision-making (DM) aspect to ABM can support decision makers in making better or, in some cases, optimized decisions for complex problems, as well as explaining the investigated phenomena. In this paper, first, the literature on DM with ABM is surveyed and classified based on the methods of integration. A scientometric analysis of the relevant literature leads us to conclude that the number of publications attempting to integrate DM and ABM has not grown during the last two decades, while analysis of the current methodologies for integrating DM and ABM indicates that they have serious drawbacks. In this regard, a novel nature-inspired model articulation called the optimal agent framework (OAF) is proposed to ameliorate these disadvantages and enhance the realization of proper decisions in ABM at a relatively low computational cost. The framework is examined with the Bass diffusion model. The results of the simulation for the customized model developed with OAF verify the feasibility of the framework. Moreover, sensitivity analyses on different agent populations, network structures, and marketing strategies depict the great potential of OAF to find optimal strategies in various stochastic and unconventional conditions that had not been addressed prior to the implementation of the framework.
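The Bass diffusion model used above to examine OAF has a standard agent-based reading: each non-adopting agent adopts with probability p (external influence) plus q times the current adopted fraction (imitation). A minimal Python sketch under illustrative parameter values (p, q, population size, and horizon are textbook-style defaults, not values from the paper):

```python
import random

def bass_abm(n_agents=1000, p=0.03, q=0.38, steps=50, seed=42):
    """Toy agent-based Bass diffusion: each step, every non-adopter
    adopts with probability p (innovation) + q * adopted_fraction
    (imitation). Returns the cumulative adopter count per step."""
    random.seed(seed)
    adopted = [False] * n_agents
    history = []
    for _ in range(steps):
        frac = sum(adopted) / n_agents
        for i in range(n_agents):
            if not adopted[i] and random.random() < p + q * frac:
                adopted[i] = True
        history.append(sum(adopted))
    return history

curve = bass_abm()  # cumulative adoption follows the familiar S-curve
```

Replacing the global adopted fraction with a per-agent neighborhood fraction is where network structure, as in the paper's sensitivity analyses, would enter.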

2020 ◽  
Vol 30 (Supplement_5) ◽  
Author(s):  
C A Fergus ◽  
T Allen ◽  
M Parker ◽  
G Pearson ◽  
L Storer ◽  
...  

Abstract Background The linear theories of change which ground many interventions do not account for the complex processes and systems in which they are implemented. This reductionist approach prioritises statistical methods which do not accommodate the stochastic, non-linear, dynamic interactions between humans and their environment. The inclusion of practitioners in the process of evidence development, together with the use of complex systems methods, mitigates these issues and results in locally relevant, timely evidence for decision-making. Methods The aim of this work was to develop localised evidence for decision-making for schistosomiasis control in Uganda, Malawi, and Tanzania. Workshops were conducted with practitioners from the Ministries of Health at various levels and partner organisations to identify evidence needs for their decision-making processes and perceptions of disease transmission and control activities. Participatory systems mapping was used to identify factors directly and indirectly related to transmission. The maps were synthesised into a master complex systems map, which served as the blueprint for a generalised spatial agent-based model (ABM) and specific ABMs tailored to the evidence needs of decision-makers. Results There was a gap in available evidence for practitioners to advocate for resources within the MoH and government budgets, as well as for intervention efficacy and resource allocation. The adaptable and data-inclusive characteristics of the ABMs made them well suited to producing localised outputs. Converted to NetLogo with a tailored user interface, these models were appropriate and responsive to the needs of decision-makers from village to national levels and across country contexts. Conclusions Used together, participatory and agent-based modelling resulted in the development of responsive and relevant evidence for practitioner decision-making.
This process is generalisable and transferable to other diseases and locations outside of those in this study. Key messages The use of participatory systems mapping to develop agent-based models resulted in relevant and timely evidence for practitioner decision-making. The approach used here is transferable and generalisable outside schistosomiasis control and the contexts in this study.


2021 ◽  
Vol 11 (21) ◽  
pp. 10397
Author(s):  
Barry Ezell ◽  
Christopher J. Lynch ◽  
Patrick T. Hester

Computational models and simulations often involve representations of decision-making processes. Numerous methods exist for representing decision-making at varied resolution levels based on the objectives of the simulation and the desired level of fidelity for validation. Decision making relies on the type of decision and the criteria that are appropriate for making it; therefore, decision makers can reach unique decisions that meet their own needs given the same information. Accounting for personalized weighting scales can help to reflect a more realistic state for a modeled system. To this end, this article reviews and summarizes eight multi-criteria decision analysis (MCDA) techniques that serve as options for reaching unique decisions based on personally and individually ranked criteria. These techniques are organized into a taxonomy of ratio assignment and approximate techniques, and the strengths and limitations of each are explored. We compare these techniques' potential uses across the Agent-Based Modeling (ABM), System Dynamics (SD), and Discrete Event Simulation (DES) modeling paradigms to inform current researchers, students, and practitioners on the state of the art and to enable new researchers to utilize methods for modeling multi-criteria decisions.
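The observation that identical information plus personal weights can yield different decisions is easy to demonstrate with the simplest MCDA technique, a weighted sum over max-normalized benefit criteria. The alternatives, scores, and weights below are invented for illustration:

```python
def weighted_sum_rank(alternatives, weights):
    """Rank alternatives by weighted sum of max-normalized scores.
    `alternatives`: dict name -> list of benefit-criterion scores.
    `weights`: personal importance weights (normalized here)."""
    total = sum(weights)
    w = [x / total for x in weights]
    # Normalize each criterion column by its maximum (linear scaling).
    cols = list(zip(*alternatives.values()))
    maxima = [max(col) for col in cols]
    scores = {
        name: sum(wi * (v / m) for wi, v, m in zip(w, vals, maxima))
        for name, vals in alternatives.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

# Two decision makers, same data, different weights -> different choices.
data = {"A": [8, 2, 5], "B": [5, 9, 4], "C": [6, 6, 8]}
first = weighted_sum_rank(data, [0.6, 0.2, 0.2])   # ranks C first
second = weighted_sum_rank(data, [0.1, 0.7, 0.2])  # ranks B first
```

The ratio assignment techniques surveyed in the article differ mainly in how the weight vector is elicited; the aggregation step stays close to this form.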


BMJ ◽  
2021 ◽  
pp. n1087
Author(s):  
Santiago Romero-Brufau ◽  
Ayush Chopra ◽  
Alex J Ryu ◽  
Esma Gel ◽  
Ramesh Raskar ◽  
...  

Abstract Objective To estimate population health outcomes with delayed second dose versus standard schedule of SARS-CoV-2 mRNA vaccination. Design Simulation agent based modeling study. Setting Simulated population based on real world US county. Participants The simulation included 100 000 agents, with a representative distribution of demographics and occupations. Networks of contacts were established to simulate potentially infectious interactions through occupation, household, and random interactions. Interventions Simulation of standard covid-19 vaccination versus delayed second dose vaccination prioritizing the first dose. The simulation runs were replicated 10 times. Sensitivity analyses included first dose vaccine efficacy of 50%, 60%, 70%, 80%, and 90% after day 12 post-vaccination; vaccination rate of 0.1%, 0.3%, and 1% of population per day; assuming the vaccine prevents only symptoms but not asymptomatic spread (that is, non-sterilizing vaccine); and an alternative vaccination strategy that implements delayed second dose for people under 65 years of age, but not until all those above this age have been vaccinated. Main outcome measures Cumulative covid-19 mortality, cumulative SARS-CoV-2 infections, and cumulative hospital admissions due to covid-19 over 180 days. Results Over all simulation replications, the median cumulative mortality per 100 000 for standard dosing versus delayed second dose was 226 v 179, 233 v 207, and 235 v 236 for 90%, 80%, and 70% first dose efficacy, respectively. The delayed second dose strategy was optimal for vaccine efficacies at or above 80% and vaccination rates at or below 0.3% of the population per day, under both sterilizing and non-sterilizing vaccine assumptions, resulting in absolute cumulative mortality reductions between 26 and 47 per 100 000. The delayed second dose strategy for people under 65 performed consistently well under all vaccination rates tested. Conclusions A delayed second dose vaccination strategy, at least for people aged under 65, could result in reduced cumulative mortality under certain conditions.
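The intuition behind the finding can be shown with a deliberately crude dose-allocation sketch (this is not the authors' agent-based model, which simulates contact networks and transmission): with a fixed daily dose supply, first-dose-first protects twice as many people per day at a lower per-person efficacy, so it accumulates more protected person-days whenever first-dose efficacy exceeds half of the two-dose efficacy. All numbers below are illustrative:

```python
def protected_person_days(doses_per_day, days, eff_one, eff_two, delayed):
    """Crude allocation arithmetic, NOT an epidemic model: a fixed daily
    dose supply either gives first doses to doses_per_day people
    (delayed strategy, efficacy eff_one) or fully vaccinates half as
    many (standard strategy, efficacy eff_two)."""
    protected = 0.0   # expected count of currently protected people
    total = 0.0       # accumulated protected person-days
    for _ in range(days):
        if delayed:
            protected += doses_per_day * eff_one
        else:
            protected += (doses_per_day / 2) * eff_two
        total += protected
    return total

d = protected_person_days(1000, 180, eff_one=0.8, eff_two=0.95, delayed=True)
s = protected_person_days(1000, 180, eff_one=0.8, eff_two=0.95, delayed=False)
```

This arithmetic ignores transmission dynamics, prioritization, and waning, which is why the full simulation finds a crossover near 70% first-dose efficacy that the sketch alone does not capture.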


2016 ◽  
Vol 10 (4) ◽  
pp. 187-198 ◽  
Author(s):  
Orly Lahav ◽  
Nuha Chagab ◽  
Vadim Talis

Purpose The purpose of this paper is to examine a central need of students who are blind: the ability to access science curriculum content. Design/methodology/approach Agent-based modeling is a relatively new computational modeling paradigm that models complex dynamic systems. NetLogo is a widely used agent-based modeling language that enables exploration and construction of models of complex systems by programming and running their rules and behaviors. Sonification of variables and events in an agent-based NetLogo computer model of gas in a container is used to convey information about the phenomena. This study mainly examined two research topics: the scientific conceptual knowledge and systems reasoning that were learned as a result of interaction with the listen-to-complexity (L2C) environment, as reflected in answers to the pre- and post-tests, and the topics of the kinetic molecular theory of gas in chemistry that were learned as a result of interaction with the L2C environment. The case study research focused on A., a woman who is adventitiously blind, over eight sessions. Findings The participant successfully completed all curricular assignments; her scientific conceptual knowledge and systems reasoning became more specific and aligned with scientific knowledge. Practical implications A practical implication of further studies is that they are likely to have an impact on the accessibility of learning materials, especially in science education for students who are blind, as equal access to low-cost learning environments equivalent to those used by sighted users would support their inclusion in the K-12 academic curriculum. Originality/value The innovative and low-cost learning system used in this research is based on transmitting visual information about dynamic and complex systems, providing perceptual compensation by harnessing auditory feedback. For the first time, the L2C system is based on sound that represents a dynamic rather than a static array. 
In this study, the authors explore how a combination of several auditory representations may affect cognitive learning ability.
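Sonification of a model variable typically reduces to a mapping from the variable's range onto an audible parameter. A hypothetical linear pitch mapping in Python (the frequency range and the linearity are assumptions for illustration, not details of the L2C system):

```python
def sonify(value, vmin, vmax, f_low=220.0, f_high=880.0):
    """Map a model variable linearly onto a pitch in Hz (a two-octave
    span by default; the range is an assumption for illustration)."""
    t = (value - vmin) / (vmax - vmin)   # position within the range
    return f_low + t * (f_high - f_low)

mid_pitch = sonify(5.0, 0.0, 10.0)  # midpoint of the range -> 550.0 Hz
```

In a NetLogo-style model, such a function would be evaluated each tick on, say, mean molecular speed, and the returned frequency sent to a synthesizer.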


Complexity ◽  
2019 ◽  
Vol 2019 ◽  
pp. 1-26
Author(s):  
Friederike Wall

Coordination among the decision-makers of an organization, each responsible for a certain partition of an overall decision-problem, is of crucial relevance to the overall performance obtained. Among the challenges of coordination in distributed decision-making systems (DDMS) is to understand how environmental conditions, for example, the complexity of the decision-problem to be solved, the problem’s predictability, and its dynamics, shape the adaptation of coordination mechanisms. These challenges apply to DDMS populated by human decision-makers, like firms, as well as to systems of artificial agents as studied in the domain of multiagent systems (MAS). It is well known that coordination for growing decision-problems and, accordingly, growing organizations involves a particular tension between shaping the search for new solutions and setting appropriate constraints to deal with increasing size and intraorganizational complexity. Against this background, the paper studies the adaptation of coordination in the course of growing decision-making organizations. For this, an agent-based simulation model based on the framework of NK fitness landscapes is employed. The study controls for different levels of complexity of the overall decision-problem, different strategies of search for new solutions, and different levels of cost of effort to implement new solutions. The results suggest that, with respect to the emerging coordination mode, complexity subtly interferes with the search strategy employed and the cost of effort. In particular, the results support the conjecture that increasing complexity leads to more hierarchical coordination. However, the search strategy shapes the predominance of hierarchy in favor of granting more autonomy to decentralized decision-makers. Moreover, the study reveals that the cost of effort for implementing new solutions, in conjunction with the search strategy, may remarkably affect the emerging form of coordination. 
This could explain differences in prevailing coordination modes across different branches or technologies or could explain the emergence of contextually inferior modes of coordination.
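The NK fitness landscape framework referenced above is standard: each of N binary decision variables contributes a random fitness component that depends on itself and K other variables, and agents search by local moves. A compact Python sketch with a circular interaction neighborhood and one-bit hill climbing (N, K, and the seeds are arbitrary; contributions are cached uniform draws):

```python
import random

def make_nk(n=10, k=3, seed=7):
    """NK fitness landscape: bit i's fitness contribution depends on
    bits i..i+k (circular). Contributions are uniform random draws,
    cached on first use so the landscape is consistent within a run."""
    rng = random.Random(seed)
    cache = [{} for _ in range(n)]
    def fitness(bits):
        total = 0.0
        for i in range(n):
            key = tuple(bits[(i + j) % n] for j in range(k + 1))
            if key not in cache[i]:
                cache[i][key] = rng.random()
            total += cache[i][key]
        return total / n        # mean contribution, always in [0, 1]
    return fitness

fitness = make_nk()
init = random.Random(0)
bits = [init.randint(0, 1) for _ in range(10)]
f = fitness(bits)
# One-bit hill climbing until no single flip improves fitness.
improved = True
while improved:
    improved = False
    for i in range(10):
        cand = bits[:]
        cand[i] ^= 1
        if fitness(cand) > f:
            bits, f = cand, fitness(cand)
            improved = True
```

Higher K makes the landscape more rugged, so local searchers stall on more, and worse, local optima; that ruggedness is the lever behind the complexity effects the paper studies.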


2020 ◽  
Vol 12 (22) ◽  
pp. 9306
Author(s):  
Nikolaos A. Skondras ◽  
Demetrios E. Tsesmelis ◽  
Constantina G. Vasilakou ◽  
Christos A. Karavitis

The terms ‘resilience’ and ‘vulnerability’ have been widely used, with multiple interpretations in a plethora of disciplines. Such a variety may easily become confusing, and could create misconceptions among the different users. Policy makers who are bound to make decisions in key spatial and temporal points may especially suffer from these misconceptions. The need for decisions may become even more pressing in times of crisis, where the weaknesses of a system are exposed, and immediate actions to enhance the systemic strengths should be made. The analysis framework proposed in the current effort, and demonstrated in hypothetical forest fire cases, tries to focus on the combined use of simplified versions of the resilience and vulnerability concepts. Their relations and outcomes are also explored, in an effort to provide decision makers with an initial assessment of the information required to deal with complex systems. It is believed that the framework may offer some service towards the development of a more integrated and applicable tool, in order to further expand the concepts of resilience and vulnerability. Additionally, the results of the framework can be used as inputs in other decision making techniques and approaches. This increases the added value of the framework as a tool.


Author(s):  
Tai-Tuck Yu ◽  
James P. Scanlan ◽  
Richard M. Crowder ◽  
Gary B. Wills

Discrete-event modeling has long been used for logistics and scheduling problems, while multi-agent modeling closely matches the human decision-making process. In this paper, a metric-based comparison between the traditional discrete-event and the emerging agent-based modeling approaches is reported. The case study involved the implementation of two functionally identical models based on a realistic, nontrivial, civil aircraft gas turbine global repair operation. The size, structural complexity, and coupling metrics from the two models were used to gauge the benefits and drawbacks of each modeling paradigm. The agent-based model was significantly better than the discrete-event model in terms of execution times, scalability, understandability, modifiability, and structural flexibility. In contrast, and importantly in an engineering context, the discrete-event model guaranteed predictable and repeatable results and was comparatively easy to test because of its single-threaded operation. However, neither modeling approach on its own possesses all of these characteristics, nor can either handle the wide range of resolutions and scales frequently encountered in problems exemplified by the case study scenario. It is recognized that agent-based modeling can closely emulate high-level human decision-making and communication, while discrete-event modeling provides a good fit for low-level sequential processes such as those found in manufacturing and logistics.
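The single-threaded, event-ordered character that makes discrete-event models easy to test can be seen in a minimal sketch: a repair shop where jobs queue for the first free machine and machine-free events are kept in a priority queue (the job list and machine count are invented, not from the case study):

```python
import heapq

def run_des(jobs, n_machines=2):
    """Minimal discrete-event repair-shop sketch: jobs are (arrival,
    duration) pairs; each job starts on the earliest-free machine at
    max(arrival, machine-free time). Returns completion times in
    arrival order."""
    free_at = [0.0] * n_machines        # when each machine becomes free
    heapq.heapify(free_at)
    completions = []
    for arrival, duration in sorted(jobs):
        start = max(arrival, heapq.heappop(free_at))
        finish = start + duration
        completions.append(finish)
        heapq.heappush(free_at, finish)
    return completions

done = run_des([(0, 5), (1, 3), (2, 4), (3, 2)])
```

Because the event order is fully determined by the timestamps, repeated runs give identical results, which is exactly the predictability and testability property the comparison credits to the discrete-event paradigm.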


Author(s):  
Lin Qiu ◽  
Riyang Phang

Political systems involve citizens, voters, politicians, parties, legislatures, and governments. These political actors interact with each other and dynamically alter their strategies according to the results of their interactions. A major challenge in political science is to understand the dynamic interactions between political actors and extrapolate from the process of individual political decision making to collective outcomes. Agent-based modeling (ABM) offers a means to comprehend and theorize the nonlinear, recursive, and interactive political process. It views political systems as complex, self-organizing, self-reproducing, and adaptive systems consisting of large numbers of heterogeneous agents that follow a set of rules governing their interactions. It allows the specification of agent properties and the rules governing agent interactions in a simulation to observe how micro-level processes generate macro-level phenomena. It forces researchers to make the assumptions surrounding a theory explicit, facilitates the discovery of extensions and boundary conditions of the modeled theory through what-if computational experiments, and helps researchers understand dynamic processes in the real world. ABM models have been built to address critical questions in political decision making, including why voter turnout remains high, how party coalitions form, how voters’ knowledge and emotion affect election outcomes, and how political attitudes change through a campaign. These models illustrate the use of ABM in explicating the assumptions and rules of theoretical frameworks, simulating repeated execution of these rules, and revealing emergent patterns and their boundary conditions. While ABM has limitations in external validity and robustness, it provides political scientists a bottom-up approach to studying a complex system by clearly defining the behavior of various actors and generating theoretical insights on political phenomena.
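As a flavor of what such models look like, here is a toy turnout-style sketch (illustrative only, not one of the published models the entry refers to): agents vote when their propensity exceeds a threshold, and each agent's propensity drifts toward the turnout observed in a small random sample of others.

```python
import random

def turnout_sim(n=500, steps=50, seed=3):
    """Toy turnout dynamics (illustrative only): agents vote when their
    propensity exceeds 0.5, and each propensity drifts toward the
    turnout observed in a random sample of five other agents."""
    rng = random.Random(seed)
    propensity = [rng.random() for _ in range(n)]
    turnout = []
    for _ in range(steps):
        votes = [p > 0.5 for p in propensity]
        rate = sum(votes) / n
        turnout.append(rate)
        for i in range(n):
            sample = [votes[rng.randrange(n)] for _ in range(5)]
            local = sum(sample) / 5
            propensity[i] += 0.1 * (local - propensity[i])
    return turnout

rates = turnout_sim()  # per-step turnout fractions
```

Even this toy exhibits the ABM workflow the entry describes: explicit individual rules, repeated execution, and an emergent macro-level time series to compare against theory.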

