Automotive IVHM: Towards Intelligent Personalised Systems Healthcare

Author(s):  
Felician Campean ◽  
Daniel Neagu ◽  
Aleksandr Doikin ◽  
Morteza Soleimani ◽  
Thomas Byrne ◽  
...  

Abstract
Underpinned by a contemporary view of automotive systems as cyber-physical systems, characterised by progressively open architectures increasingly defined by their interaction with the users and the smart environment, this paper provides a critical and up-to-date review of automotive Integrated Vehicle Health Management (IVHM) systems. The paper discusses the challenges with prognostics and intelligent health management of automotive systems, and proposes a high-level framework, referred to as the Automotive Healthcare Analytic Factory, to systematically collect and process heterogeneous data from across the product lifecycle, towards actionable insight for personalised healthcare of systems.

10.29007/pld3 ◽  
2018 ◽  
Author(s):  
Kristin Yvonne Rozier

The need for runtime verification (RV), and tools that enable RV in practice, is widely recognized. Systems that need to operate autonomously necessitate on-board RV technologies, from Mars rovers that need to sustain operation despite delayed communication from operators on Earth, to Unmanned Aerial Systems (UAS) that must fly without a human on-board, to robots operating in dynamic or hazardous environments that must take care to preserve both themselves and their surroundings. Enabling all forms of autonomy, from tele-operation to automated control to decision-making to learning, requires some ability for the autonomous system to reason about itself. The broader class of safety-critical systems requires means of runtime self-checking to ensure their critical functions have not degraded during use.

Runtime verification addresses a vital need for self-referential reasoning and system health management, but there is currently no generalized approach that answers the lower-level questions. What are the inputs to RV? What are the outputs? What level(s) of the system do we need RV tools to verify, from bits and sensor signals to high-level architectures, and at what temporal frequency? How do we know our runtime verdicts are correct? How do the answers to these questions change for software, hardware, or cyber-physical systems (CPS)? How do we benchmark RV tools to assess their (comparative) suitability for particular platforms? The goal of this position paper is to fuel the discussion of ways to improve how we evaluate and compare tools for runtime verification, particularly for cyber-physical systems.
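As a rough illustration of the kind of on-board self-checking the abstract describes, a minimal online monitor for a bounded-response property might look like the sketch below. The property, event names, and two-valued verdict are illustrative, not taken from any tool the paper surveys:

```python
# Minimal runtime-verification sketch: an online monitor for the
# bounded-response property "every request is followed by a grant
# within `bound` steps". All names here are illustrative.

def bounded_response_monitor(trace, bound):
    """Return ('violation', step_of_request) at the first request not
    granted within `bound` steps, else ('ok so far', None)."""
    pending = []  # step indices of requests awaiting a grant
    for step, event in enumerate(trace):
        if event == "request":
            pending.append(step)
        elif event == "grant" and pending:
            pending.pop(0)  # oldest outstanding request is served first
        # any request older than `bound` steps is a violation
        if pending and step - pending[0] >= bound:
            return ("violation", pending[0])
    return ("ok so far", None)

trace = ["request", "idle", "grant", "request", "idle", "idle", "idle"]
print(bounded_response_monitor(trace, bound=3))  # → ('violation', 3)
```

The monitor consumes one event per time step and keeps only a small queue of outstanding obligations, which is the kind of low-overhead state an embedded RV component needs.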


2021 ◽  
Vol 21 (2) ◽  
pp. 1-25
Author(s):  
Pin Ni ◽  
Yuming Li ◽  
Gangmin Li ◽  
Victor Chang

Cyber-Physical Systems (CPS), as multi-dimensional complex systems that connect the physical and cyber worlds, have a strong demand for processing large amounts of heterogeneous data. These tasks include Natural Language Inference (NLI) over text from different sources. However, current research on natural language processing in CPS has not explored this area. Therefore, this study proposes a Siamese Network structure that combines stacked residual bidirectional Long Short-Term Memory with an Attention mechanism and a Capsule Network for the NLI module in CPS, used to infer the relationship between text/language data from different sources. The model implements NLI tasks and is evaluated in detail on three main NLI benchmarks as the basic semantic-understanding module in CPS. Comparative experiments show that the proposed method achieves competitive performance, generalizes reasonably well, and balances performance against the number of trained parameters.
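The Siamese idea underlying such an NLI module can be sketched in a few lines: both sentences pass through one shared encoder, and the classifier sees a combination of the two encodings. The toy mean-of-embeddings encoder below is a stand-in for the paper's stacked residual BiLSTM + attention + capsule stack, and the vocabulary is invented for illustration:

```python
import numpy as np

# Hedged sketch of a Siamese NLI front end: one SHARED encoder for
# premise and hypothesis, then the common [u; v; |u-v|; u*v] pairing
# as the feature vector fed to a classifier. Encoder and vocabulary
# are toy stand-ins, not the paper's architecture.

rng = np.random.default_rng(0)
VOCAB = {"a": 0, "cat": 1, "sat": 2, "dog": 3, "ran": 4}
EMB = rng.normal(size=(len(VOCAB), 8))   # shared embedding table

def encode(tokens):
    # shared-weight encoder: both branches call this same function
    vecs = EMB[[VOCAB[t] for t in tokens]]
    return vecs.mean(axis=0)

def pair_features(premise, hypothesis):
    u, v = encode(premise), encode(hypothesis)
    # standard Siamese/NLI feature vector: [u; v; |u-v|; u*v]
    return np.concatenate([u, v, np.abs(u - v), u * v])

feats = pair_features(["a", "cat", "sat"], ["a", "dog", "ran"])
print(feats.shape)  # (32,) = 4 * embedding dim
```

Sharing the encoder weights is what makes the structure "Siamese": identical inputs produce identical encodings, so the |u-v| block vanishes for matching pairs.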


2019 ◽  
Vol 1 (2) ◽  
pp. 19-37
Author(s):  
K. Sridhar Patnaik ◽  
Itu Snigdh

Cyber-physical systems (CPS) are an exciting emerging research area that has drawn the attention of many researchers. However, bridging the computing and physical paradigms introduces many challenges in developing CPS, such as the incorporation of heterogeneous physical entities, system verification, and security assurance. A common or unified architecture plays an important role in the process of CPS design. This article introduces an architectural modeling representation of CPS. The layers of models are integrated from high level to low level to obtain a general meta-model. Architecture captures the essential attributes of a CPS. Despite the rapid growth in IoT and CPS, a general, principled modeling approach for the systematic development of these new engineering systems is still missing. System modeling is one of the important aspects of developing abstract models of a system, wherein each model represents a different view or perspective of that system. With the Unified Modeling Language (UML), such complex systems can be successfully presented graphically.


Author(s):  
Thomas J Byrne ◽  
Aleksandr Doikin ◽  
Felician Campean ◽  
Daniel Neagu

Abstract
Advancing Industry 4.0 concepts by mapping the product of the automotive industry on the spectrum of Cyber Physical Systems, we immediately recognise the convoluted processes involved in the design of new generation vehicles. New technologies developed around the communication core (IoT) enable novel interactions with data. Our framework employs previously untapped data from vehicles in the field for intelligent vehicle health management and knowledge integration into design. Firstly, the concept of an inter-disciplinary artefact is introduced to support the dynamic alignment of disparate functions, so that cyber variables change when physical variables change. Secondly, the axiomatic categorisation (AC) framework simulates functional transformations from artefact to artefact, to monitor and control automotive systems rather than components. Herein, an artefact is defined as a triad of the physical and engineered component, the information processing entity, and communication devices at their interface. Variable changes are modelled using AC, in conjunction with the artefacts, to aggregate functional transformations within the conceptual boundary of a physical system of systems.


Author(s):  
Rajit Nair ◽  
Preeti Nair ◽  
Vidya Kant Dwivedi

Today, cyber-physical systems are undergoing a transformation in which processing is performed in a distributed rather than a centralized manner. This approach is commonly known as Edge computing, which periodically demands new hardware as computing-performance requirements increase. In this situation, systems must remain energy efficient and adaptable. To meet these requirements, SRAM-based FPGAs, with their inherent run-time reconfigurability, are integrated with smart power-management strategies. However, this approach often falls short in terms of user accessibility and ease of development. This chapter presents an integrated framework for developing FPGA-based high-performance embedded systems for Edge computing in cyber-physical systems. The processing architecture is hardware-based, enabling reconfigurable systems to be managed from high-level systems without human intervention.


Electronics ◽  
2020 ◽  
Vol 9 (10) ◽  
pp. 1736
Author(s):  
Davide Piumatti ◽  
Jacopo Sini ◽  
Stefano Borlo ◽  
Matteo Sonza Reorda ◽  
Radu Bojoi ◽  
...  

Complex systems are composed of numerous interconnected subsystems, each designed to perform specific functions. The different subsystems use many technological items that work together, as in the case of cyber-physical systems. Typically, a cyber-physical system is composed of different mechanical actuators driven by electrical power devices and monitored by sensors. Several approaches are available for designing and validating complex systems, and among them, behavioral-level modeling is becoming one of the most popular. When such cyber-physical systems are employed in mission- or safety-critical applications, it is mandatory to understand the impacts of faults on them and how failures in subsystems can propagate through the overall system. In this paper, we propose a methodology for supporting the failure mode, effects, and criticality analysis (FMECA) aimed at identifying the critical faults and assessing their effects on the overall system. The end goal is to analyze how a fault affecting a single subsystem possibly propagates through the whole cyber-physical system, considering also the embedded software and the mechanical elements. In particular, our approach allows the analysis of the propagation through the whole system (working at high level) of a fault injected at low level. This paper provides a solution to automate the FMECA process (until now mainly performed manually) for complex cyber-physical systems. It improves the failure classification effectiveness: considering our test case, it reduced the number of critical faults from 10 to 6; the other four faults are mitigated by the cyber-physical system architecture. The proposed approach has been tested on a real cyber-physical system in charge of driving a three-phase motor for industrial compressors, showing its feasibility and effectiveness.
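The core propagation step behind an automated FMECA can be illustrated as a reachability walk over a subsystem dependency graph: inject a fault into one low-level component and find every higher-level function it can reach. The graph and component names below are invented for illustration, loosely echoing the paper's motor-drive test case, and are not the authors' model:

```python
# Hedged sketch of FMECA fault propagation: which subsystems may be
# affected when one low-level component fails? Names are illustrative.

DEPENDS_ON = {                      # edge: consumer -> its producers
    "gate_driver": ["pwm_unit"],
    "inverter": ["gate_driver", "dc_link"],
    "motor": ["inverter"],
    "compressor": ["motor"],
}

def affected_by(faulty, depends_on):
    """Return all components whose output may be corrupted when
    `faulty` fails (transitive closure over the dependency edges)."""
    affected, frontier = set(), {faulty}
    while frontier:
        cur = frontier.pop()
        for consumer, producers in depends_on.items():
            if cur in producers and consumer not in affected:
                affected.add(consumer)
                frontier.add(consumer)
    return affected

print(sorted(affected_by("pwm_unit", DEPENDS_ON)))
# → ['compressor', 'gate_driver', 'inverter', 'motor']
```

In a real FMECA each reached component would additionally be scored for severity and detectability; the reachability set only bounds where a low-level fault can surface at system level.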


SIMULATION ◽  
2020 ◽  
Vol 96 (9) ◽  
pp. 753-765 ◽  
Author(s):  
Seyed-Hosein Attarzadeh-Niaki ◽  
Ingo Sander

The growing complexity of embedded and cyber-physical systems makes the design of all system components from scratch increasingly impractical. Consequently, already from the early stages of a design flow, designers rely on prior experience, which comes in the form of legacy code or third-party intellectual property (IP) blocks that must be co-simulated with the rest of the system model. Current approaches partly address this co-simulation problem for specific scenarios in an ad hoc style. This work suggests a general method for co-simulation of heterogeneous IPs with a system modeling and simulation framework. External IPs can be integrated as high-level models running in an external simulator, or as software- and hardware-in-the-loop simulation, with minimal effort. Examples of co-simulation scenarios for wrapping models with different semantics are presented together with their practical usage in two case studies. The presented method is also used to formulate a refinement-by-replacement workflow for IP-based system design.
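The wrapping idea can be sketched as an adapter that maps a black-box external model's own step convention onto the host simulator's fixed time quanta. The interface names below are invented for illustration and are not those of the authors' framework:

```python
# Hedged sketch of co-simulating an external IP: a foreign model with
# its own step(inputs, dt) convention is wrapped behind the host
# simulator's advance-by-one-quantum interface. Names are illustrative.

class ExternalIPWrapper:
    """Adapts a black-box model exposing step(inputs, dt) -> outputs
    to a host simulator that advances in fixed time quanta."""
    def __init__(self, foreign_step, delta_t):
        self.foreign_step = foreign_step
        self.delta_t = delta_t
        self.time = 0.0

    def advance(self, inputs):
        # exchange data, then let the external model advance one quantum
        outputs = self.foreign_step(inputs, self.delta_t)
        self.time += self.delta_t
        return outputs

# toy external IP: a discrete integrator standing in for legacy code
def legacy_integrator():
    state = {"acc": 0.0}
    def step(u, dt):
        state["acc"] += u * dt
        return state["acc"]
    return step

ip = ExternalIPWrapper(legacy_integrator(), delta_t=0.1)
for _ in range(10):
    y = ip.advance(1.0)
print(round(y, 6))  # → 1.0  (unit input integrated for 1.0 s)
```

Because the host only ever sees `advance`, the same wrapper slot can later hold a software- or hardware-in-the-loop endpoint, which is what enables a refinement-by-replacement workflow.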


Author(s):  
Igor Vitalievich Kotenko ◽  
Igor Borisovich Parashchuk

The object of this study is methodological approaches to constructing membership functions for decision-making (decision-support) procedures in the fuzzy management of information-security events of modern cyber-physical systems. These methodological approaches (methods) make it possible to account for the vagueness of the observed and controlled protection parameters of complex controlled technical systems. The comparative analysis focuses on the methods most applicable to specific tasks: constructing membership functions from the analysis of probability density functions, and a method using a simple probabilistic scheme. Based on the density-function method, a mechanism is proposed for determining membership values for deciding whether a particular computer attack belongs to the fuzzy set of dangerous attacks (the set of attacks of a high level of danger). This mechanism has low mathematical and computational complexity, yet accounts for the fuzziness of the observed and controlled security parameters, which increases the reliability of monitoring information-security events within the fuzzy security management of systems of this class.
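The density-based construction the abstract describes can be sketched as follows: estimate the probability density of a security metric over samples of dangerous attacks, then rescale it to [0, 1] so that the most probable value receives full membership. The data, grid, and bin count below are synthetic placeholders, not values from the study:

```python
import numpy as np

# Hedged sketch: build a membership function for "dangerous attack"
# from an empirical probability density of a severity score, rescaled
# so its peak has membership 1. All data here is synthetic.

def membership_from_density(samples, grid, bins=10):
    hist, edges = np.histogram(samples, bins=bins, density=True)
    centers = (edges[:-1] + edges[1:]) / 2
    density = np.interp(grid, centers, hist)
    return density / density.max()    # normalise peak to membership 1

rng = np.random.default_rng(1)
severity = rng.normal(loc=7.0, scale=1.0, size=500)  # synthetic scores
grid = np.linspace(0, 10, 11)
mu = membership_from_density(severity, grid)
print(mu.max())  # 1.0 by construction
```

Normalising by the peak (rather than by the integral, as a density is) is what turns the density estimate into a membership function in the fuzzy-set sense.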

