Integration of the ST language in a model-based engineering environment for control systems: An approach for compiler implementation

2008 ◽  
Vol 5 (2) ◽  
pp. 87-101 ◽  
Author(s):  
Elisabete Ferreira ◽  
Rogério Paulo ◽  
da Cruz ◽ 
Pedro Henriques

In the context of the INTEGRA project, compilation and code generation features for behavior definition are to be integrated into an existing model-based engineering environment for control systems. The devised compiler architecture is domain-specific and supports multiple input languages and multiple target platforms. In this paper we discuss an architectural approach in which the compiling process is organized in two stages: the compiling stage and the linking stage. The compiling stage generates target-independent code from possibly multiple input languages. The linking stage assembles precompiled code modules and generates target-specific executable code for a given virtual machine. More specifically, this paper describes the integration of the ST language into the tool's core meta-model, and the ST compiler is presented as an application case study.
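The two-stage architecture described above can be sketched as follows. All names, the toy ST subset, and the opcode tables are hypothetical illustrations, not the INTEGRA project's actual API: the point is only that the first stage emits target-independent code while the second stage binds it to a specific virtual machine.

```python
# Sketch of a two-stage compile/link pipeline (hypothetical names and
# opcodes; handles only ST-like assignments of the form "x := a + b").

def compile_stage(st_source: str) -> list[str]:
    """Compiling stage: translate source into target-independent stack code."""
    code = []
    for line in st_source.strip().splitlines():
        var, expr = line.split(":=")
        a, op, b = expr.split()
        code += [f"PUSH {a.strip()}", f"PUSH {b.strip()}",
                 "ADD" if op == "+" else "MUL",
                 f"STORE {var.strip()}"]
    return code

VM_OPCODES = {  # per-target opcode tables consulted only at link time
    "vm_a": {"PUSH": 0x01, "ADD": 0x02, "MUL": 0x03, "STORE": 0x04},
}

def link_stage(modules: list[list[str]], target: str) -> list[int]:
    """Linking stage: assemble precompiled modules into target-specific code."""
    table = VM_OPCODES[target]
    return [table[instr.split()[0]] for module in modules for instr in module]
```

Because the intermediate form is target-independent, modules compiled from different input languages could in principle be linked together, which is the multi-language, multi-platform property the abstract claims.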

Author(s):  
Jan Peleska ◽  
Johannes Feuser ◽  
Anne E. Haxthausen

A novel approach to managing development, verification, and validation artifacts for the European Train Control System as open, publicly available items is analyzed and discussed with respect to its implications on system safety, security, and certifiability. After introducing this so-called model-driven openETCS approach, a threat analysis is performed, identifying both safety and security hazards that may be common to all model-based development paradigms for safety-critical railway control systems, or specific to the openETCS approach. In the subsequent sections state-of-the-art methods suitable to counter these threats are reviewed, and novel promising research results are described. These research results comprise domain-specific modeling, model-based code generation in combination with automated object code verification and explicit utilization of virtual machines to ensure containment of security hazards.


Author(s):  
Koldo Zuniga ◽  
Thomas P. Schmitt ◽  
Herve Clement ◽  
Joao Balaco

Correction curves are of great importance in the performance evaluation of heavy-duty gas turbines (HDGT). They provide the means to translate performance test results from test conditions to the rated conditions. The correction factors are usually calculated using the original equipment manufacturer's (OEM) gas turbine thermal model (a.k.a. cycle deck), varying one parameter at a time throughout a given range of interest. For some parameters, bi-variate effects are considered when the associated secondary performance effect of another variable is significant. Although this traditional approach has been widely accepted by the industry, has offered a simple and transparent means of correcting test results, and has provided a reasonably accurate correction methodology for gas turbines with conventional control systems, it neglects the interdependence of each correction parameter with the remaining parameters. Moreover, its inherently static nature is not well suited to today's modern gas turbine control systems, which employ integral gas turbine aero-thermal models that continuously adapt the turbine's operating parameters to the "as running" aero-thermal component performance characteristics. Accordingly, the most accurate means of correcting the measured performance from test conditions to the guarantee conditions is through Model-Based Performance Corrections, in agreement with the current PTC-22 and ISO 2314, although this method is not yet commonly used or accepted within the industry. The implementation of Model-Based Performance Corrections is presented for the case study of a GE 9FA gas turbine upgrade project with an advanced model-based control system that accommodated a multitude of operating boundaries. Unique plant operating restrictions, coupled with the project's focus on partial-load heat rate, presented a perfect scenario in which to employ Model-Based Performance Corrections.
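The traditional one-parameter-at-a-time methodology the abstract criticizes can be sketched as follows. The polynomial coefficients and values are purely illustrative, not OEM cycle-deck data; the sketch shows only the mechanics of evaluating independent univariate correction curves and dividing out their product, which is exactly where the neglected interdependence enters.

```python
# Traditional univariate performance correction (illustrative numbers,
# not actual OEM cycle-deck data).

def correction_factor(curve: list[float], measured: float, rated: float) -> float:
    """Evaluate a polynomial correction curve at the deviation from rated."""
    dx = measured - rated
    return sum(c * dx**i for i, c in enumerate(curve))

def correct_power(measured_kw: float, factors: list[float]) -> float:
    """Translate measured output to rated conditions by dividing out the
    product of the correction factors, which treats each parameter as
    independent of the others."""
    product = 1.0
    for f in factors:
        product *= f
    return measured_kw / product

# Example: a hypothetical ambient-temperature curve of 0.3 %/degC.
temp_factor = correction_factor([1.0, -0.003], measured=25.0, rated=15.0)
corrected = correct_power(97_000.0, [temp_factor])
```

A model-based correction replaces the product of precomputed factors with a full thermal-model evaluation at both the test and the guarantee conditions, so parameter interactions and adaptive control behavior are captured instead of assumed away.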


2012 ◽  
Vol 9 (1) ◽  
pp. 47-70 ◽  
Author(s):  
Tod A. Sedbrook

ABSTRACT Developing a domain-specific language (DSL) to express business policies requires modeling tools for eliciting, applying, and maintaining the knowledge of business experts. This study defines a DSL meta-model and prototype to create visual business models that conform to the Resource, Event, Agent-Enterprise Ontology (REA-EO). The meta-model specifies REA-EO modeling components, and the prototype provides a visual interface to design operational and policy-level models. Code-generation templates then transform design models into executable code that supports business applications. The study describes the capabilities of the prototype and validates its use in the context of a business case. Data Availability: The paper's software modeling prototype and its companion code-generation templates are available for research purposes as open-source Visual Studio extensions by contacting the author.
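The template-driven step from design model to executable code can be sketched minimally as below. The model dictionary and generated class are hypothetical; the paper's actual prototype uses Visual Studio code-generation templates, not Python, but the principle — a declarative REA element rendered into source code — is the same.

```python
# Minimal code-generation sketch: rendering a declarative REA Event
# element into source code (hypothetical model shape and output).

from string import Template

REA_EVENT_TEMPLATE = Template(
    "class ${name}Event:\n"
    "    def __init__(self, resource, provider, receiver):\n"
    "        self.resource = resource   # REA Resource being exchanged\n"
    "        self.provider = provider   # REA Agent giving the resource\n"
    "        self.receiver = receiver   # REA Agent receiving it\n"
)

def generate_event_class(model: dict) -> str:
    """Render executable code from one design-model element."""
    return REA_EVENT_TEMPLATE.substitute(name=model["name"])
```

Keeping the ontology's Resource/Event/Agent roles explicit in the generated code is what lets the business-policy layer reason about models uniformly.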


2010 ◽  
Vol 13 (1) ◽  
Author(s):  
Andrés Vignaga

Global Model Management (GMM) is a model-based approach for managing large sets of interrelated, heterogeneous, and complex MDE artifacts. Such artifacts are usually represented as models; however, as many Domain Specific Languages have a textual concrete syntax, GMM also supports textual entities and model-to-text/text-to-model transformations, which are projectors that bridge the MDE technical space and the Grammarware technical space. As the transformations supported by GMM are executable artifacts, typing is critical for preventing type errors during execution. We proposed the cGMM calculus, which formalizes the notion of typing in GMM. In this work, we extend cGMM with new types and rules for supporting textual entities and projectors. With such an extension, those artifacts may participate in transformation compositions addressing larger transformation problems. We illustrate the new constructs in the context of an interoperability case study.
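The role typing plays here can be sketched as a composition check over transformation signatures. The types and names below are illustrative, not cGMM syntax: the sketch only shows why rejecting ill-typed compositions statically prevents execution-time type errors, including for projectors that cross between the model and text technical spaces.

```python
# Sketch of type-checked transformation composition in the spirit of a
# GMM-like calculus (illustrative types, not actual cGMM notation).

from dataclasses import dataclass

@dataclass(frozen=True)
class Transformation:
    name: str
    source_type: str   # e.g. "Text<SQL>" (grammarware) or "Model<UML>" (MDE)
    target_type: str

def compose(f: Transformation, g: Transformation) -> Transformation:
    """g after f: reject ill-typed compositions before anything executes."""
    if f.target_type != g.source_type:
        raise TypeError(f"cannot compose {f.name} : ... -> {f.target_type} "
                        f"with {g.name} : {g.source_type} -> ...")
    return Transformation(f"{g.name}.{f.name}", f.source_type, g.target_type)

# A text-to-model projector bridging into the MDE space, then a
# model-to-model transformation — composable because the types line up.
t2m = Transformation("parse",   "Text<SQL>",  "Model<SQL>")
m2m = Transformation("sql2uml", "Model<SQL>", "Model<UML>")
```

Because projectors carry `Text<...>` types on one side, the same check governs crossings between the two technical spaces and ordinary model-to-model steps alike.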


2015 ◽  
Vol 25 (09n10) ◽  
pp. 1739-1741 ◽ 
Author(s):  
Daniel Adornes ◽  
Dalvan Griebler ◽  
Cleverson Ledur ◽  
Luiz Gustavo Fernandes

MapReduce was originally proposed as a suitable and efficient approach for analyzing and processing large amounts of data. Since then, many research efforts have contributed MapReduce implementations for distributed and shared memory architectures. Nevertheless, different architectural levels require different optimization strategies in order to achieve high-performance computing. Such strategies, in turn, have led to very different MapReduce programming interfaces across these efforts. This paper presents some research notes on coding productivity when developing MapReduce applications for distributed and shared memory architectures. As a case study, we introduce our current research on a unified MapReduce domain-specific language with code generation for Hadoop and Phoenix++, which has achieved a coding productivity increase of 41.84% and up to 94.71% without significant performance losses (below 3%) compared to those frameworks.
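The programming model that such a unified DSL abstracts over can be sketched in plain Python. This is the classic word-count formulation of the map and reduce phases, backend-neutral by construction; it is not the paper's DSL, whose value lies in generating the very different Hadoop and Phoenix++ interfaces from one such description.

```python
# The MapReduce programming model, sketched backend-neutrally with the
# canonical word-count example.

from collections import defaultdict

def map_phase(documents):
    """Map: emit one ('word', 1) key-value pair per word occurrence."""
    for doc in documents:
        for word in doc.split():
            yield word, 1

def reduce_phase(pairs):
    """Reduce: group the emitted pairs by key and sum the values per key."""
    groups = defaultdict(int)
    for key, value in pairs:
        groups[key] += value
    return dict(groups)
```

A distributed backend shards `map_phase` across workers and shuffles pairs by key before reducing; a shared-memory backend partitions across threads — the divergent optimization strategies the abstract describes, hidden behind the same two-phase interface.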

