Parallel Model-Based Diagnosis on Multi-Core Computers

2016 ◽  
Vol 55 ◽  
pp. 835-887 ◽  
Author(s):  
Dietmar Jannach ◽  
Thomas Schmitz ◽  
Kostyantyn Shchekotykhin

Model-Based Diagnosis (MBD) is a principled and domain-independent way of analyzing why a system under examination is not behaving as expected. Given an abstract description (model) of the system's components and their behavior when functioning normally, MBD techniques rely on observations about the actual system behavior to reason about possible causes when there are discrepancies between the expected and observed behavior. Due to its generality, MBD has been successfully applied in a variety of application domains over the last decades. In many application domains of MBD, testing different hypotheses about the reasons for a failure can be computationally costly, e.g., because complex simulations of the system behavior have to be performed. In this work, we therefore propose different schemes of parallelizing the diagnostic reasoning process in order to better exploit the capabilities of modern multi-core computers. We propose and systematically evaluate parallelization schemes for Reiter's hitting set algorithm for finding all or a few leading minimal diagnoses using two different conflict detection techniques. Furthermore, we perform initial experiments for a basic depth-first search strategy to assess the potential of parallelization when searching for one single diagnosis. Finally, we test the effects of parallelizing "direct encodings" of the diagnosis problem in a constraint solver.
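The core of Reiter's approach mentioned above is computing minimal hitting sets of the conflict sets returned by the conflict detector; each minimal hitting set is a diagnosis candidate. The following is a minimal, sequential sketch of that idea (a breadth-first HS-tree with subset pruning), not the parallelized algorithm from the paper:

```python
def minimal_hitting_sets(conflicts):
    """Breadth-first HS-tree sketch: every minimal hitting set of the
    conflict sets is a diagnosis candidate (Reiter-style reasoning)."""
    results = []
    frontier = [frozenset()]              # partial hitting sets, level by level
    while frontier:
        next_frontier = []
        for hs in frontier:
            # find a conflict not yet hit by this partial hitting set
            unhit = next((c for c in conflicts if not (hs & c)), None)
            if unhit is None:
                # hs hits every conflict; keep it if no known subset does
                if not any(r <= hs for r in results):
                    results.append(hs)
            else:
                # branch on each component of the unhit conflict
                for comp in unhit:
                    cand = hs | {comp}
                    if not any(r <= cand for r in results):   # pruning
                        next_frontier.append(cand)
        frontier = next_frontier
    return results

# Example: conflicts {A1, A2} and {A2, A3}
confs = [frozenset({"A1", "A2"}), frozenset({"A2", "A3"})]
diagnoses = minimal_hitting_sets(confs)
# minimal diagnoses: {A2} and {A1, A3}
```

In the parallel schemes the paper evaluates, it is the expansion of such tree nodes (and the underlying conflict computation) that is distributed across cores; the sketch above only shows the sequential backbone.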

Author(s):  
Huisi Zhou ◽  
Dantong Ouyang ◽  
Liming Zhang ◽  
Naiyu Tian

2021 ◽  
Author(s):  
Dag Børre Lillestøl ◽  
Odd Torbjørn Kårvand ◽  
Are Torstensen

Abstract This paper outlines an approach to improving the mooring integrity of existing long-term mooring systems by using existing and commercially available data. It demonstrates how AIS and hindcast weather data can be used to increase understanding of mooring systems and to monitor and quantify the gaps between the "as-designed", "as-installed" and "as-is" states of a long-term mooring system. Long-term moored units have traditionally suffered from many early failures, caused by damage and errors introduced in the installation phase, and from costly and unnecessary late-in-life failures, a fact rated high on the agenda of the underwriters. Numerous papers have been written on this topic, but only in recent years has the industry started to ensure that systems are inspected to a sufficient degree with respect to their physical condition, taking these lessons into account. However, the second important element, the calibration of the mooring analysis against actual vessel and mooring system behavior and performance, has not yet received the attention it requires. Deviations from the intended design are introduced during the installation of a mooring system, and the design assumptions are never fully accurate. The gap between the design assumptions and the actual system increases over time, and the industry today does not focus sufficiently on mapping and quantifying its effect. The described method introduces a proactive approach that requires no onboard equipment, instead applying algorithms to existing data and design documentation. This paper focuses on the use of AIS data in combination with historic weather and environmental data and seeks to demonstrate how this low-cost method can provide useful information about the mooring system.
To emphasize the importance of such calibrations, the July 2021 edition of the in-service DNV class rules, DNVGL-OS-0300, formally introduces requirements for calibrating the design assumptions of long-term moored units against survey data, service history and actual mooring system behavior, in order to ensure that a unit's mooring system condition and performance are known in light of the original design assumptions.
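The "as-designed vs. as-is" comparison described above can be illustrated with a deliberately simplified sketch. All names and numbers below are assumptions for illustration, not the authors' method: a linearized mooring stiffness predicts the vessel excursion for a hindcast environmental load, and that prediction is compared with the excursion observed from AIS positions.

```python
import math

# Hypothetical illustration: compare the vessel offset observed via AIS
# positions with the offset an as-designed (linearized) mooring stiffness
# predicts for a hindcast environmental load. The stiffness value and the
# linear load-offset model are illustrative assumptions.

DESIGN_STIFFNESS_KN_PER_M = 120.0   # assumed linearized mooring stiffness

def observed_offset_m(ais_lat, ais_lon, ref_lat, ref_lon):
    """Horizontal excursion of the AIS position from the design reference
    position, using a flat-earth approximation (adequate at ~1 km scales)."""
    dlat = (ais_lat - ref_lat) * 111_320.0
    dlon = (ais_lon - ref_lon) * 111_320.0 * math.cos(math.radians(ref_lat))
    return math.hypot(dlat, dlon)

def predicted_offset_m(env_load_kn):
    """Offset predicted from a hindcast load and the design stiffness."""
    return env_load_kn / DESIGN_STIFFNESS_KN_PER_M

def calibration_gap(ais_lat, ais_lon, ref_lat, ref_lon, env_load_kn):
    """Relative gap between as-is behavior (AIS) and as-designed prediction."""
    obs = observed_offset_m(ais_lat, ais_lon, ref_lat, ref_lon)
    pred = predicted_offset_m(env_load_kn)
    return (obs - pred) / pred
```

A persistent positive gap across many hindcast events would then be the kind of signal that motivates recalibrating the mooring analysis against the actual system.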


Author(s):  
Alexander Hayward ◽  
Marian Daun ◽  
Ana Petrovska ◽  
Wolfgang Böhm ◽  
Lisa Krajinski ◽  
...  

Abstract The evolution from traditional embedded systems to dynamically interacting, collaborative embedded systems increases the complexity and the number of requirements involved in the model-based development process. In this chapter, we present the new aspects that need to be considered when modeling functions for collaborative embedded systems and collaborative system groups, such as the relationship between functions of a single system and functions resulting from the interplay of multiple systems. These different aspects are represented by a formal, domain-independent metamodel. To aid understanding, we also apply the metamodel to two different use cases.
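The central distinction the chapter draws, between functions of a single system and functions that only emerge from the interplay of several systems, can be sketched as a tiny data model. The class and instance names below are illustrative assumptions, not the chapter's metamodel:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: these names are assumptions chosen to mirror the
# distinction between single-system functions and collaboration functions.

@dataclass
class SystemFunction:
    """A function provided by one embedded system on its own."""
    name: str

@dataclass
class CollaborationFunction:
    """A function realized only by the interplay of multiple systems."""
    name: str
    realized_by: list = field(default_factory=list)  # contributing SystemFunctions

# Hypothetical vehicle-platooning example:
lead = SystemFunction("broadcast_trajectory")
follow = SystemFunction("follow_vehicle")
platoon = CollaborationFunction("drive_in_platoon", realized_by=[lead, follow])
```

The point of such a metamodel is that `drive_in_platoon` cannot be attributed to any single system's function model; it exists only at the level of the collaborative system group.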


2021 ◽  
Vol 30 (1) ◽  
pp. 53-78
Author(s):  
Masood Ahmad ◽  
Rosmiwati Mohd-Mokhta

The ongoing increase in the complexity of practical systems, together with lower tolerance of performance degradation and stricter safety requirements, has made it necessary to detect faults (FD) as early as possible. Over the last few decades, much research in fault diagnosis has addressed fault detection and isolation in linear and nonlinear systems. The paper's objective is to survey various state-of-the-art model-based FD techniques developed for linear time-invariant (LTI) systems, so that interested readers can learn about recent developments in this field. Model-based FD techniques for LTI systems are classified as parameter-estimation methods, parity-space-based methods, and observer-based methods. The background and recent progress of each of these methods in the context of fault detection, along with their practical applications, are discussed in this paper. Furthermore, two different FD techniques are compared via analytical equations and simulation results obtained from a DC motor model. Finally, possible future research directions in model-based FD, particularly for LTI systems, are highlighted for prospective researchers. The comparison and the emerging research topics distinguish this contribution from existing survey papers on FD.
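Of the three classes named above, the observer-based approach is the easiest to show compactly: an observer tracks the state of a fault-free model, and the output residual stays near zero until a fault makes the plant deviate from that model. The scalar system and all parameter values below are illustrative assumptions, not from the survey:

```python
# Minimal observer-based fault detection sketch for a scalar LTI system
#   x[k+1] = a*x[k] + b*u[k] + f[k],   y[k] = x[k]
# A Luenberger-style observer tracks the state; the residual r = y - y_hat
# stays at zero while the model matches and becomes nonzero once an
# additive fault f enters the plant. All values here are illustrative.

a, b, L = 0.9, 1.0, 0.5          # plant parameters and observer gain

def simulate(steps, fault_at, fault_size):
    x, x_hat, residuals = 0.0, 0.0, []
    for k in range(steps):
        u = 1.0                                   # constant input
        f = fault_size if k >= fault_at else 0.0  # additive fault
        r = x - x_hat                             # output residual (y = x)
        residuals.append(r)
        x = a * x + b * u + f                     # true (possibly faulty) plant
        x_hat = a * x_hat + b * u + L * r         # observer uses fault-free model
    return residuals

res = simulate(steps=20, fault_at=10, fault_size=2.0)
# residual is exactly 0 before the fault and clearly nonzero after it
```

A practical FD scheme would threshold this residual (and shape it for disturbance rejection); the sketch only shows the residual-generation step common to observer-based methods.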


Electronics ◽  
2020 ◽  
Vol 9 (11) ◽  
pp. 1866
Author(s):  
Kei Ohnishi ◽  
Kouta Hamano ◽  
Mario Koeppen

Recently, evolutionary algorithms that can efficiently solve decomposable binary optimization problems have been developed. These so-called model-based evolutionary algorithms build a model for generating solution candidates by applying a machine learning technique to a population. Their central procedure is linkage detection, which reveals the problem structure, that is, how the entire problem is composed of sub-problems. However, model-based evolutionary algorithms have been shown to be ineffective for problems that have no relevant structure or whose structure is hard to identify. Therefore, evolutionary algorithms that can solve both types of problems quickly, reliably, and accurately are required. The objective of this paper is to investigate whether the evolutionary algorithm evolving developmental timings (EDT) that we previously proposed can be such an algorithm. For any problem, the EDT makes some variable values converge more quickly than the rest, and then decides the values of the remaining variables to obtain a higher fitness value while the converged values are fixed. In addition, the factors that decide which variable values converge more quickly, that is, the developmental timings, are themselves targets of evolution. Simulation results reveal that the EDT performs worse than the linkage tree genetic algorithm (LTGA), one of the state-of-the-art model-based evolutionary algorithms, on decomposable problems; that the performance difference between them shrinks for problems with overlaps among linkages; and that the EDT outperforms the LTGA on problems whose structures are hard to identify. These results suggest that the appropriate search strategy differs between decomposable problems and problems that are hard to decompose.
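A standard example of the "decomposable problem" that linkage-learning algorithms such as LTGA exploit is the concatenated deceptive trap. The sketch below (an illustrative benchmark, not the EDT or LTGA themselves) shows why respecting linkage matters: each 3-bit block must be treated as a unit, because the fitness within a block deceptively rewards moves away from the optimum:

```python
import random

# Concatenated 3-bit trap: a classic decomposable benchmark. Each 3-bit
# block scores 3 for all-ones but otherwise rewards moves toward all-zeros,
# so each block forms one "linkage set" that should be inherited as a unit.

def trap3(block):
    ones = sum(block)
    return 3 if ones == 3 else 2 - ones   # scores 2, 1, 0 for 0, 1, 2 ones

def fitness(bits, k=3):
    return sum(trap3(bits[i:i + k]) for i in range(0, len(bits), k))

def block_crossover(p1, p2, k=3):
    """Linkage-respecting crossover: exchange whole blocks between parents,
    so a solved block is never broken apart (unlike uniform crossover)."""
    child = []
    for i in range(0, len(p1), k):
        donor = p1 if random.random() < 0.5 else p2
        child.extend(donor[i:i + k])
    return child

child = block_crossover([1] * 6, [0] * 6)   # blocks stay intact
```

On such problems an algorithm that detects the blocks (like LTGA) has a decisive advantage; on problems without identifiable blocks that advantage disappears, which is the regime the EDT targets.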


Author(s):  
Simona Bernardi ◽  
José Merseguer

Multi-formalism modeling techniques enable the modeling and analysis of different aspects of a system. One of the main issues in integrating multiple tools to support multi-formalism modeling is how to provide a common method for reporting the results of the analysis and how to interchange them between models that are based on different formalisms and often represent the system behavior at different granularity levels. In this chapter, the authors focus on the Petri net formalism and present preliminary work toward the definition of a common XML-based language for specifying the results obtained from the analysis of Petri net models. The authors use a meta-model-based approach, in which first a structured set of meta-models representing the Petri net result concepts and their relationships is defined. Then, model transformation rules enable the mapping of the meta-models to XML constructs.
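To make the goal concrete, a result-interchange document of the kind described might record quantities such as a place's mean number of tokens or a transition's throughput. The element and attribute names in this sketch are assumptions for illustration, not the authors' actual schema:

```python
import xml.etree.ElementTree as ET

# Sketch of an XML analysis-result document in the spirit of the chapter;
# the element/attribute names below are illustrative assumptions.

def petri_net_results_xml(net_id, results):
    """results: (kind, object_ref, value) tuples, e.g. mean tokens for a
    place or throughput for a transition of the analyzed Petri net."""
    root = ET.Element("analysisResults", {"net": net_id})
    for kind, ref, value in results:
        ET.SubElement(root, "result",
                      {"kind": kind, "objectRef": ref, "value": str(value)})
    return ET.tostring(root, encoding="unicode")

xml = petri_net_results_xml(
    "producer_consumer",
    [("meanTokens", "place.buffer", 1.75),
     ("throughput", "transition.consume", 0.42)])
```

Because each result entry names the model object it refers to, a tool based on a different formalism can map the values back onto its own representation of the same system, which is exactly the interchange problem the meta-models address.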

