Business Models for Distributed-Simulation Orchestration and Risk Management

Information ◽  
2021 ◽  
Vol 12 (2) ◽  
pp. 71
Author(s):  
Simon Gorecki ◽  
Jalal Possik ◽  
Gregory Zacharewicz ◽  
Yves Ducq ◽  
Nicolas Perry

Nowadays, industries implement heterogeneous systems from different domains, backgrounds, and operating systems. Manufacturing systems are becoming increasingly complex, forcing engineers to manage that complexity on several fronts. Technical complexity raises interoperability, risk-management, and hazard issues that must be taken into account from business-model design through technical implementation. To resolve the complexities and incompatibilities between heterogeneous components, several distributed-simulation and cosimulation standards and tools can be used for data exchange and interconnection. High-level architecture (HLA) and the functional mock-up interface (FMI) are the main international standards for distributed simulation and cosimulation; HLA is used mainly in the academic and defense domains, while FMI is used mostly in industry. In this article, we propose an HLA/FMI implementation connected to an external business-process modeling tool called Papyrus. Papyrus is configured as a master federate that orchestrates the subsimulations based on the above standards. The developed framework is integrated with external heterogeneous components through an FMI interface and aims to bring interoperability to a system used by a power generation company.
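The orchestration idea in this abstract, a master that steps heterogeneous subsimulations through a common interface and exchanges data between them at fixed communication points, can be illustrated with a minimal fixed-step co-simulation master loop. The `Component` class and its `do_step`/`get_output` methods below are hypothetical stand-ins for an FMI-style slave interface, not the authors' Papyrus implementation.

```python
# Minimal fixed-step co-simulation master loop (hypothetical sketch).
# Each Component mimics a co-simulation slave: it advances its own
# internal state by one communication step and exposes an output.

class Component:
    def __init__(self, name, gain):
        self.name = name
        self.gain = gain
        self.state = 0.0
        self._u = 0.0

    def set_input(self, u):
        self._u = u

    def do_step(self, t, dt):
        # Simple first-order update standing in for a real solver call.
        self.state += dt * (self.gain * self._u - self.state)

    def get_output(self):
        return self.state


def run_master(components, couplings, t_end, dt):
    """Orchestrate components: exchange data, then step every slave.

    couplings maps a destination component name to the source component
    whose output feeds it; None means a constant external input of 1.0.
    """
    t = 0.0
    while t < t_end:
        # 1) Data exchange phase: wire inputs to outputs.
        for dst, src in couplings.items():
            u = components[src].get_output() if src else 1.0
            components[dst].set_input(u)
        # 2) Step phase: advance all slaves by one communication interval.
        for c in components.values():
            c.do_step(t, dt)
        t += dt
    return {name: c.get_output() for name, c in components.items()}
```

A real master based on FMI would replace `do_step` with calls into the FMU's solver and would also negotiate step sizes; the two-phase exchange-then-step structure is the part the sketch is meant to show.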

2009 ◽  
Author(s):  
K. A. McTaggart ◽  
R. G. Langlois

Replenishment at sea is essential for sustainment of naval operations away from home ports. This paper describes physics-based simulation of the transfer of solid payloads between two ships. For a given operational scenario, the simulation can determine whether events such as breakage of replenishment gear or immersion of payload in the ocean will occur. The simulation includes detailed modelling of the replenishment gear and ship motions. Distributed simulation using the High Level Architecture facilitates time management and data exchange among simulation components.


Author(s):  
Lichao Xu ◽  
Szu-Yun Lin ◽  
Andrew W. Hlynka ◽  
Hao Lu ◽  
Vineet R. Kamat ◽  
...  

There has been a strong need for simulation environments capable of modeling the deep interdependencies between complex systems encountered during natural hazards, such as the interactions and coupled effects between civil infrastructure response, human behavior, and social policies, for improved community resilience. Coupling such complex components into an integrated simulation requires continuous data exchange between the simulators running the separate models throughout the simulation process. This can be implemented by means of distributed simulation platforms or data-passing tools. To provide a systematic reference for simulation tool choice and to facilitate the development of compatible distributed simulators for deeply interdependent studies of natural hazards, this article focuses on generic tools suitable for integrating simulators from different fields, rather than on platforms used mainly within specific fields. With this aim, the article provides a comprehensive review of the most commonly used generic distributed simulation platforms (Distributed Interactive Simulation (DIS), High Level Architecture (HLA), Test and Training Enabling Architecture (TENA), and the Data Distribution Service (DDS)) and data-passing tools (the Robot Operating System (ROS) and Lightweight Communications and Marshalling (LCM)) and compares their advantages and disadvantages. Three specific limitations of existing platforms are identified from the perspective of natural hazard simulation. To mitigate these limitations, two platform design recommendations are provided, namely message exchange wrappers and hybrid communication, to help improve data-passing capabilities in existing solutions and to guide the design of a new domain-specific distributed simulation framework.


Author(s):  
Fouzia Ounnar ◽  
Patrick Pujo ◽  
Selma Limam Mansar

Unlike current logistics networks, in which chains are fixed, in the proposed partnership network a dynamic chain is built only when an order is requested; nothing is planned ahead of time. An isoarchic control model based on the holonic paradigm is proposed, and control of the partnership network can be seen through a simultaneous analysis of the holon views. The proposed control relies on a multicriteria analysis method with complete aggregation, the Analytic Hierarchy Process (AHP). Orders are assigned by searching for the best response to a Call for Proposals submitted by a customer; the solution that appears most efficient with respect to the evaluation criteria is adopted. For validation purposes, the proposed approach was simulated in the HLA (High Level Architecture) distributed simulation environment, and a set of realistic tests was used to evaluate it.
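The multicriteria aggregation this abstract relies on, AHP, derives priority weights for alternatives from a pairwise comparison matrix; the usual weights are the matrix's principal eigenvector, which the sketch below approximates by power iteration. The comparison values are illustrative, not taken from the paper.

```python
def ahp_weights(matrix, iterations=50):
    """Approximate the AHP priority vector (principal eigenvector)
    of a pairwise comparison matrix by power iteration."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iterations):
        # Multiply the matrix by the current weight vector.
        w_new = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w_new)
        w = [x / total for x in w_new]  # normalize so weights sum to 1
    return w

# Illustrative pairwise comparisons of three candidate partners:
# matrix[i][j] > 1 means alternative i is preferred over alternative j,
# and matrix[j][i] is its reciprocal.
comparisons = [
    [1.0,     3.0, 5.0],
    [1 / 3.0, 1.0, 2.0],
    [1 / 5.0, 0.5, 1.0],
]
```

In a full AHP ranking, such weight vectors would be computed per criterion and combined with criterion weights; the Call-for-Proposals response with the highest aggregate score would win the order.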


Author(s):  
Edwin Dado ◽  
Reza Beheshti ◽  
Martinus van de Ruitenbeek

This chapter provides an overview of product modelling in the Building and Construction (BC) industry, based on the authors' experiences from research projects they have conducted and taking into account the results of other research projects. The chapter starts with an introduction to and background of the subject area in terms of motivation, industrial needs and requirements, followed by a historical overview in which we distinguish five generations of product modelling developments. The first generation is characterized by the influence of earlier expert-system and database developments and by its constituting high-level constructs (e.g. EDM, BSM, RATAS and GARM). The second generation is characterized by the development of detailed aspect systems and supporting frameworks for data exchange and integration (e.g. IRMA, ATLAS, COMBINE, PISA and IMPPACT). The third generation is characterized by its focus on collaborative engineering support through middleware and client/server technology (e.g. SPACE, CONCUR, BCCM, VEGA and ToCEE) and by the development of the IFC. The fourth generation is heavily influenced by the Internet and Web Services standards such as XML, SOAP and UDDI and by related business models such as eBusiness and eWork (e.g. bcXML, ifcXML and eConstruct). The next (fifth) generation will be based on the emerging semantic web standards such as OWL and RDF and on the concepts of ontology modelling, as experienced in ongoing (European) projects such as SWOP. After this historical overview, an analysis of the characteristics of notable conceptual product modelling approaches is presented.
Here we discuss the Standardisation, Minimal Model, Core Model, NOT, Vocabulary and Ontology product modelling approaches. This is followed by an analysis of a number of specific conceptual product models and of how the basic product modelling constructs (i.e. semantics, lifecycle modifiers and multiple project views) are implemented. The chapter ends with a discussion of some ongoing projects (COINS, CHEOPS and SWOP) in the context of future trends.


Information ◽  
2020 ◽  
Vol 11 (10) ◽  
pp. 469
Author(s):  
Mario Marin ◽  
Gene Lee ◽  
Jaeho Kim

Multiple resolution modeling (MRM) is the future of distributed simulation. This article describes the different definitions and notions related to MRM. MRM is a relatively new research area, and there is demand for simulator integration from a modeling-complexity point of view. The article also analyzes in detail a taxonomy based on the researchers' experience. Finally, an example using the high-level architecture (HLA) is explained to illustrate the above definitions and, in particular, to examine the problems common to these distributed simulation configurations. The steps required to build an MRM distributed simulation system are introduced, and the conclusions describe the lessons learned from this unique form of distributed simulation.
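The core MRM operations, aggregating high-resolution entities into a low-resolution unit and disaggregating that unit back into individuals, can be sketched as follows. The vehicle/column example and field names are illustrative assumptions, not the article's case study.

```python
# Hypothetical sketch of multi-resolution aggregation/disaggregation.
# A high-resolution model tracks individual vehicles; a low-resolution
# model represents the same unit as one aggregate with averaged state.

def aggregate(vehicles):
    """Collapse individual vehicle states into one aggregate entity."""
    n = len(vehicles)
    return {
        "count": n,
        "x": sum(v["x"] for v in vehicles) / n,
        "y": sum(v["y"] for v in vehicles) / n,
    }

def disaggregate(unit, spacing=10.0):
    """Re-create plausible individual entities from an aggregate,
    spreading them in a column centered on the aggregate position."""
    n = unit["count"]
    offset = -(n - 1) / 2.0
    return [
        {"x": unit["x"] + (offset + i) * spacing, "y": unit["y"]}
        for i in range(n)
    ]
```

A key MRM consistency requirement, which this sketch satisfies by construction, is that re-aggregating disaggregated entities reproduces the aggregate's state; the hard problems the article discusses arise when both resolution levels evolve between transitions.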


2020 ◽  
Vol 12 (17) ◽  
pp. 6969
Author(s):  
Simon Gorecki ◽  
Jalal Possik ◽  
Gregory Zacharewicz ◽  
Yves Ducq ◽  
Nicolas Perry

In order to control manufacturing systems, managers need risk and performance evaluation methods and simulation tools. However, these simulation techniques must evolve towards being multiperformance, multiactor, and multisimulation tools, and this requires interoperability between those distributed components. This paper presents an integrated platform that brings interoperability to several simulation components. This work expands the process modeling tool Papyrus to allow it to communicate with external components through both distributed simulation and cosimulation standards. The distributed modeling and simulation framework (DMSF) platform takes its environment into consideration in order to evaluate the sustainability of the system while integrating external heterogeneous components. For instance, a DMSF connection with external IoT devices has been implemented. Moreover, the orchestration of different smart manufacturing components and services is achieved through configurable business models. As a result, an automotive industry case study has successfully been tested to demonstrate the sustainability of smart supply chains and manufacturing factories, allowing better connectivity with their real environments.


Author(s):  
Tom van den Berg ◽  
Barry Siegel ◽  
Anthony Cramp

NATO and the nations use distributed simulation environments for various purposes, such as training, mission rehearsal, and decision support in acquisition processes. Consequently, modeling and simulation (M&S) has become a critical technology for the coalition and its nations. Achieving interoperability between participating simulation systems and ensuring the credibility of results currently often requires enormous effort with regard to time, personnel, and budget. Recent developments in cloud computing and service-oriented architecture (SOA) may offer opportunities to better utilize M&S capabilities in order to satisfy NATO critical needs. A new concept that includes service orientation and the provision of M&S applications via the as-a-service model of cloud computing may enable composable simulation environments that can be deployed rapidly and on demand. This new concept is known as M&S as a Service (MSaaS). Containerization has also recently emerged as an alternative to virtualization. Containerization is the process of creating, packaging, distributing, deploying, and executing applications in a lightweight and standardized process execution environment known as a container. Because containers are, in principle, lightweight, they are suitable vehicles for the provision of packaged (micro)services. Service orientation is an approach to the design of heterogeneous, distributed systems in which solution logic is structured in the form of interoperating services. This paper investigates various aspects of service orientation and containerization, including simulation composition, networking, discovery, scalability, and overall performance. This investigation provides background information on the topics of service orientation, containerization, and Docker – a technology ecosystem for working with containers.
A case study is presented for the use of Docker in support of a training simulation based on the high level architecture (HLA). The HLA is an IEEE standard architecture for distributed simulation environments that was originally developed for defense applications. The case study introduces a number of training use cases, and shows how Docker can be used to assist in their implementation. The performance impact of running a simulation within container technology is also investigated. The application of container technology to HLA-based simulations as presented in this paper is novel. The motivation for looking at this topic stems from the activity being conducted within NATO MSG-136.
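The deployment pattern the case study describes, one container per simulation component discovering each other over a shared network, can be illustrated with a minimal Compose file. The image names, service names, and environment variable below are hypothetical placeholders, not the configuration used in MSG-136 or the paper.

```yaml
# Hypothetical compose file: one RTI container plus two federate
# containers joined to a shared bridge network, mirroring a
# "container per simulation component" pattern.
services:
  rti:
    image: example/rti-exec:latest     # placeholder RTI image
    networks: [federation]
  federate-a:
    image: example/federate-a:latest   # placeholder federate image
    environment:
      - RTI_HOST=rti                   # discover the RTI by service name
    networks: [federation]
    depends_on: [rti]
  federate-b:
    image: example/federate-b:latest
    environment:
      - RTI_HOST=rti
    networks: [federation]
    depends_on: [rti]
networks:
  federation:
    driver: bridge
```

Container-level DNS handles the discovery aspect the paper investigates: each federate reaches the RTI by service name rather than a hard-coded address, so the same composition can be redeployed on different hosts unchanged.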


Author(s):  
Maxwell Scale Uwadia Osagie ◽  
Amenze Joy Osagie

Botnets are among the numerous attacks ravaging the networking environment. Their approach is brutal and dangerous to network infrastructures as well as to client systems. Since the introduction of botnets, different design methods have been employed against this divergent approach, but the takeover of servers and client systems continues unabated. To address this, we first identify Mpack, ICEpack and Fiesta as enhanced IRC tools and analyze their role in data exchange using the OSI model. This analysis motivated the development of a high-level architecture representing the structural mechanism and the defensive mechanism within the network server, so as to contain the botnet trend. Finally, the architecture was designed to respond proactively when scanning, synergizing the double data-verification modules in an encapsulated manner within the server system.


Author(s):  
Vasily Y. Kharitonov

Distributed virtual reality (DVR) systems represent one of the most intensively developing branches of distributed simulation technology to date. Examples of such systems include various human-in-the-loop applications for training, education and entertainment. Modern DVR systems require sophisticated data exchange mechanisms to provide consistent yet responsive interaction among a large number of heterogeneous components. While many DVR systems have been implemented in the past decade, there is still a lack of a universal, easily deployable and extensible framework enabling rapid creation of complete systems from scratch. In this work we present the TerraNet framework, a middleware that allows an application developer to easily implement and deploy medium-sized DVR systems for specific tasks without direct low-level network programming. TerraNet provides a high-level application programming interface to create, manage and distribute objects in a shared virtual environment. In this paper we discuss the overall framework architecture, its basic features and functionality, as well as possible practical applications.
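The object-distribution idea behind such middleware, create a shared object once and replicate its attribute updates to every connected peer, can be sketched with an in-process stand-in. The class names and API below are hypothetical illustrations, not TerraNet's actual interface.

```python
# Hypothetical sketch of shared-object distribution in a DVR middleware.
# A registry owns shared objects and fans attribute updates out to all
# peers; real middleware would ship these updates over the network.

class SharedObject:
    def __init__(self, obj_id):
        self.obj_id = obj_id
        self.attrs = {}

class Peer:
    def __init__(self, name):
        self.name = name
        self.replicas = {}  # obj_id -> local copy of the attributes

    def on_update(self, obj_id, attrs):
        self.replicas.setdefault(obj_id, {}).update(attrs)

class Registry:
    def __init__(self):
        self.objects = {}
        self.peers = []

    def join(self, peer):
        self.peers.append(peer)
        # Late joiners receive the current state of every shared object.
        for obj in self.objects.values():
            peer.on_update(obj.obj_id, dict(obj.attrs))

    def create(self, obj_id):
        self.objects[obj_id] = SharedObject(obj_id)
        return self.objects[obj_id]

    def update(self, obj_id, **attrs):
        self.objects[obj_id].attrs.update(attrs)
        for peer in self.peers:
            peer.on_update(obj_id, attrs)
```

The late-joiner handling in `join` is the consistency-versus-responsiveness trade-off in miniature: every peer eventually sees the same state, while individual updates are pushed immediately rather than batched.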

