Engineering Model Independence

2018 ◽  
Vol 22 (2) ◽  
pp. 191-229
Author(s):  
Zachary Pirtle ◽  
Jay Odenbaugh ◽  
Andrew Hamilton ◽  
Zoe Szajnfarber

According to population biologist Richard Levins, every discipline has a “strategy of model building,” which involves implicit assumptions about epistemic goals and the types of abstractions and modeling approaches used. We offer suggestions about how to model complex systems based on a strategy that focuses on independence in modeling. While there are many possible and desirable modeling strategies, we contrast a model-independence-focused strategy with the more common strategy of adding increasing levels of detail to a single model. Levins calls the latter a ‘brute force’ strategy of modeling, which can encounter problems as it attempts to add ever more detail and predictive precision. In contrast, a model-independence-focused strategy, which we call a ‘pluralistic strategy,’ draws on Levins’s use of an assemblage of multiple, simple, and, critically, independent models of ecological systems to do predictive and explanatory analysis. We use the example of model analysis of levee failure during Hurricane Katrina to show what a pluralistic strategy looks like in engineering. Under this strategy, one can deliberately engineer the set of available models so that the models are more independent and complementary, and hence more likely to be accurate. We offer advice on ways of making models independent, as well as a set of epistemic goals for model development that different models can emphasize.
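As a rough numerical illustration of the statistical intuition behind this strategy (not drawn from the paper): when several models of comparable accuracy err independently, a majority vote among them is right more often than any single model, whereas models that share the same blind spots gain nothing from aggregation. A minimal Python sketch:

    import numpy as np

    rng = np.random.default_rng(0)
    n_trials, n_models, p_correct = 100_000, 5, 0.7

    # Independent models: each errs on its own random subset of cases.
    independent = rng.random((n_trials, n_models)) < p_correct
    # Fully dependent models: all five share one error pattern.
    correlated = np.repeat(rng.random((n_trials, 1)) < p_correct, n_models, axis=1)

    for name, votes in [("independent", independent), ("correlated", correlated)]:
        majority = votes.sum(axis=1) > n_models // 2
        print(f"{name:>11s} majority-vote accuracy: {majority.mean():.3f}")

With five models at 70% individual accuracy, the independent ensemble reaches roughly 84%, while the perfectly correlated one stays at 70%.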

Author(s):  
Marvin Zaluski ◽  
Sylvain Létourneau ◽  
Jeff Bird ◽  
Chunsheng Yang

The CF-18 aircraft is a complex system for which a variety of data are systematically recorded: operational flight data from sensors and Built-In Test Equipment (BITE), and maintenance activities recorded by personnel. These data resources are stored and used within the operating organization, but new analytical and statistical techniques and tools are being developed that could be applied to these data to benefit the organization. This paper investigates the utility of readily available CF-18 data to develop data mining-based models for prognostics and health management (PHM) systems. We introduce a generic data mining methodology developed to build prognostic models from operational and maintenance data, and elaborate on challenges specific to the use of CF-18 data from the Canadian Forces. We focus on a number of key data mining tasks, including data gathering, information fusion, data pre-processing, model building, and evaluation. The solutions developed to address these tasks are described. A software tool developed to automate the model development process is also presented. Finally, the paper discusses preliminary results on the creation of models to predict F404 No. 4 Bearing and MFC (Main Fuel Control) failures on the CF-18.
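As an illustrative sketch of what such a pipeline can look like (synthetic data and made-up feature names, not the authors' actual CF-18 variables or tooling): fuse per-flight sensor summaries with maintenance-derived failure labels, then train and evaluate a classifier:

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n = 2000
    # Synthetic per-flight feature table (stand-ins for fused flight/BITE data).
    flights = pd.DataFrame({
        "vib_rms": rng.normal(1.0, 0.2, n),          # e.g. bearing vibration level
        "oil_temp_max": rng.normal(90.0, 5.0, n),    # e.g. peak oil temperature
        "hours_since_overhaul": rng.uniform(0, 1000, n),
    })
    # Label: failure recorded within the next few flights; in practice this
    # label comes from fusing the maintenance records with the flight data.
    risk = (0.02 + 0.30 * (flights.vib_rms > 1.3)
                 + 0.20 * (flights.hours_since_overhaul > 800))
    flights["failure_soon"] = rng.random(n) < risk

    X, y = flights.drop(columns="failure_soon"), flights["failure_soon"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print(classification_report(y_te, model.predict(X_te)))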


2017 ◽  
Author(s):  
Piero Dalle Pezze ◽  
Nicolas Le Novère

Background: The rapid growth of the number of mathematical models in Systems Biology has fostered the development of many tools to simulate and analyse them. The reliability and precision of these tasks often depend on multiple repetitions, and they can be optimised if executed as pipelines. In addition, new formal analyses can be performed on these repeat sequences, revealing important insights about the accuracy of model predictions.

Results: Here we introduce SBpipe, an open source software tool for automating repetitive tasks in model building and simulation. Using basic configuration files, SBpipe builds a sequence of repeated model simulations or parameter estimations, performs analyses from this generated sequence, and finally generates a LaTeX/PDF report. The parameter estimation pipeline offers analyses of parameter profile likelihood and parameter correlation using samples from the computed estimates. Specific pipelines for scanning one or two model parameters at the same time are also provided. Pipelines can run on multicore computers, Sun Grid Engine (SGE), or Load Sharing Facility (LSF) clusters, speeding up the processes of model building and simulation. SBpipe can execute models implemented in COPASI, Python, or coded in any other programming language using Python as a wrapper module. Future support for other software simulators can be dynamically added without affecting the current implementation.

Conclusions: SBpipe allows users to automatically repeat the tasks of model simulation and parameter estimation, and to extract robustness information from these repeat sequences in a solid and consistent manner, facilitating model development and analysis. The source code and documentation of this project are freely available at the web site: https://pdp10.github.io/sbpipe/.
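The repeat-and-analyse idea is independent of any particular tool. A minimal Python sketch (not SBpipe's own interface) of repeating a parameter estimation on re-noised data and extracting precision and parameter-correlation information from the resulting sequence of estimates:

    import numpy as np
    from scipy.optimize import curve_fit

    # Toy model: exponential decay with two parameters.
    def model(t, a, k):
        return a * np.exp(-k * t)

    rng = np.random.default_rng(0)
    t = np.linspace(0, 5, 40)
    y_true = model(t, 2.0, 0.8)

    # Repeat the fit many times on re-noised data, as a pipeline would.
    estimates = []
    for _ in range(200):
        y_obs = y_true + rng.normal(0, 0.05, t.size)
        popt, _ = curve_fit(model, t, y_obs, p0=[1.0, 1.0])
        estimates.append(popt)
    estimates = np.array(estimates)

    print("means:", estimates.mean(axis=0))
    print("std errors:", estimates.std(axis=0))
    print("parameter correlation:\n", np.corrcoef(estimates.T))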


2013 ◽  
Vol 31 (15_suppl) ◽  
pp. 1592-1592 ◽  
Author(s):  
Alyson L. Mahar ◽  
Susan Halabi ◽  
Lisa M. McShane ◽  
Patricia A. Groome ◽  
Carolyn C. Compton ◽  
...  

Background: Clinical prediction in cancer depends on a myriad of prognostic factors and relies on sound methodology for model building and validation. Increased understanding of complex tumour biology allows biological markers to be considered alongside standard clinical and pathological factors for prediction. We evaluated the published studies supporting existing prediction tools in three cancers.

Methods: The scientific literature and online resources were searched for clinical prediction tools for survival in three cancers: colorectal, lung, and melanoma. A priori criteria determined by the Molecular Modellers Working Group of the AJCC were evaluated, including: defined patient population, consideration of standard prognostic variables, model development approaches, validation strategies, performance metrics, presentation form of the prediction tool, and intended clinical use.

Results: Seventy-eight tools intended for prediction of survival were identified for the three cancers: 41 in colorectal, 23 in lung, and 14 in melanoma. Clinical presentations varied within each cancer: 23 of the colorectal cancer tools focused on advanced disease with liver metastases, and the remainder varied by stage; 16 lung cancer tools focused on NSCLC and 7 on SCLC. Even in narrowly defined situations there was no consensus on key variables; for example, no variables were common to all 8 prediction tools for metastatic lung cancer. Variable definitions were often missing or vague, and the form of the model was often not provided, hampering independent validation and usability. Only 32 of 78 tools were supported by appropriate internal validation statistics, and only 21 of 78 by external validation. Often the development of risk scores did not create groups for whom treatment decisions would be similar.

Conclusions: The quality of the literature supporting clinical prediction tools is variable, and the accuracy and utility of many existing tools are undetermined. Methodological guidelines for prediction tool development and validation should be adopted and adhered to. Studies developing and validating clinical prediction tools in cancer must be reported in a complete and transparent fashion to facilitate proper interpretation and judgment of utility.
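To make the distinction between apparent and internally validated performance concrete, here is a small, generic Python sketch (synthetic data, not taken from any of the reviewed tools) contrasting apparent accuracy with a cross-validated AUC:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for a cohort with clinical and biomarker covariates.
    X, y = make_classification(n_samples=500, n_features=8, n_informative=4,
                               random_state=0)
    model = LogisticRegression(max_iter=1000)

    # Apparent performance (fit and evaluate on the same data) is optimistic...
    apparent = model.fit(X, y).score(X, y)
    # ...so internal validation should use resampling, e.g. cross-validated AUC.
    cv_auc = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
    print(f"apparent accuracy: {apparent:.3f}")
    print(f"cross-validated AUC: {cv_auc.mean():.3f} +/- {cv_auc.std():.3f}")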


2018 ◽  
Author(s):  
Aidan C. Daly ◽  
Michael Clerx ◽  
Kylie A. Beattie ◽  
Jonathan Cooper ◽  
David J. Gavaghan ◽  
...  

The modelling of the electrophysiology of cardiac cells is one of the most mature areas of systems biology. This sustained concentration of research effort brings with it new challenges, foremost among which is choosing which of these models is most suitable for addressing a particular scientific question. In a previous paper, we presented our initial work in developing an online resource for the characterisation and comparison of electrophysiological cell models in a wide range of experimental scenarios. In that work, we described how we had developed a novel protocol language that allowed us to separate the details of the mathematical model (the majority of cardiac cell models take the form of ordinary differential equations) from the experimental protocol being simulated. We developed a fully open online repository (which we termed the Cardiac Electrophysiology Web Lab) that allows users to store and compare the results of applying the same experimental protocol to competing models. In the current paper we describe the most recent and planned extensions of this work, focused on supporting the process of model building from experimental data. We outline the work needed to develop a machine-readable language for describing the process of inferring parameters from wet-lab datasets, and illustrate our approach through a detailed example of fitting a model of the hERG channel to experimental data. We conclude by discussing the challenges of making further progress in this domain towards our goal of facilitating a fully reproducible approach to the development of cardiac cell models.
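As a toy stand-in for the kind of parameter inference such a language would describe (a single-gate ODE model fitted to a synthetic trace, not the authors' actual hERG model or the Web Lab's syntax):

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    # Toy single-gate channel model: do/dt = alpha*(1 - o) - beta*o,
    # a simplified stand-in for the gating structure of ion channel models.
    def simulate(params, t):
        alpha, beta = params
        sol = solve_ivp(lambda t, o: alpha * (1 - o) - beta * o,
                        (t[0], t[-1]), [0.0], t_eval=t)
        return sol.y[0]

    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 100)
    true = np.array([0.9, 0.3])
    data = simulate(true, t) + rng.normal(0, 0.02, t.size)  # synthetic "wet lab" trace

    fit = least_squares(lambda p: simulate(p, t) - data, x0=[0.5, 0.5],
                        bounds=([1e-6, 1e-6], [10, 10]))
    print("true:", true, "estimated:", fit.x)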


Geophysics ◽  
2011 ◽  
Vol 76 (5) ◽  
pp. WB191-WB207 ◽  
Author(s):  
Yaxun Tang ◽  
Biondo Biondi

We present a new strategy for efficient wave-equation migration-velocity analysis in complex geological settings. The proposed strategy has two main steps: simulating a new data set using an initial unfocused image, and performing wavefield-based tomography using this data set. We demonstrate that the new data set can be synthesized using generalized Born wavefield modeling for a specific target region where velocities are inaccurate. We also show that the new data set can be much smaller than the original because of the target-oriented modeling strategy, while still containing the velocity information necessary for successful velocity analysis. These features make the new data set well suited to target-oriented, fast, and interactive velocity model building. We demonstrate the performance of our method on both a synthetic data set and a field data set acquired from the Gulf of Mexico, where we update the subsalt velocity in a target-oriented fashion and obtain a subsalt image with improved continuity, signal-to-noise ratio, and flattened angle-domain common-image gathers.
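The target-oriented idea can be caricatured in one dimension with simple convolutional modelling (a sketch of the concept only, not the generalized Born modelling used in the paper): data are synthesized from the image, and restricting the image to a target window yields a much sparser data set that still carries the target's reflectivity:

    import numpy as np

    # 1D convolutional stand-in for wavefield modelling: data = wavelet * image.
    dt, nt = 0.004, 500
    reflectivity = np.zeros(nt)
    reflectivity[[120, 260, 400]] = [0.8, -0.5, 0.6]   # spikes in the "image"

    # Ricker wavelet
    f0 = 25.0
    tw = np.arange(-0.1, 0.1, dt)
    wavelet = (1 - 2 * (np.pi * f0 * tw) ** 2) * np.exp(-(np.pi * f0 * tw) ** 2)

    full_data = np.convolve(reflectivity, wavelet, mode="same")

    # Target-oriented synthesis: keep only the image window around the target,
    # so the re-modelled data set is far sparser than the original.
    target = np.zeros(nt)
    target[220:300] = reflectivity[220:300]
    target_data = np.convolve(target, wavelet, mode="same")
    print("significant samples, full vs target-only:",
          int((np.abs(full_data) > 1e-3).sum()),
          int((np.abs(target_data) > 1e-3).sum()))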


Author(s):  
Marvin Zaluski ◽  
Sylvain Létourneau ◽  
Jeff Bird ◽  
Chunsheng Yang

The CF-18 (CF denotes Canadian Forces) aircraft is a complex system for which a variety of data are systematically being recorded: flight data from sensors, built-in test equipment data, and maintenance data. Without proper analytical and statistical tools, these data resources are of limited use to the operating organization. Focusing on data mining-based modeling, this paper investigates the use of readily available CF-18 data to support the development of prognostics and health management systems. A generic data mining methodology has been developed to build prognostic models from operational and maintenance data. This paper introduces the methodology and elaborates on challenges specific to the use of CF-18 data from the Canadian Forces. A number of key data mining tasks are examined including data gathering, information fusion, data preprocessing, model building, and model evaluation. The solutions developed to address these tasks are described. A software tool developed to automate the model development process is also presented. Finally, this paper discusses preliminary results on the creation of models to predict F404 no. 4 bearing and main fuel control failures on the CF-18.


2021 ◽  
Vol 13 (1) ◽  
Author(s):  
Manuel Pastor ◽  
José Carlos Gómez-Tamayo ◽  
Ferran Sanz

This article describes Flame, an open source software framework for building predictive models and supporting their use in production environments. Flame is a web application with a web-based graphical interface that can be used as a desktop application or installed on a server to receive requests from multiple users. Models can be built starting from any collection of biologically annotated chemical structures, since the software supports structural normalization, molecular descriptor calculation, and machine learning model generation using predefined workflows. The model-building workflow can be customized from the graphical interface by selecting the type of normalization, the molecular descriptors, and the machine learning algorithm from a panel of state-of-the-art methods implemented natively. Moreover, Flame implements a mechanism for extending its source code, allowing unlimited model customization. Models generated with Flame can be easily exported, facilitating collaborative model development. All models are stored in a model repository that supports model versioning. Models are identified by unique model IDs and include detailed documentation formatted using widely accepted standards. The current version is the result of nearly 3 years of development in collaboration with users from the pharmaceutical industry within the IMI eTRANSAFE project, which aims, among other objectives, to develop high-quality predictive models based on shared legacy data for assessing the safety of drug candidates.
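A conceptual sketch of the normalize-then-featurize-then-learn workflow such a tool automates, using RDKit and scikit-learn directly with made-up molecules and activity values (this is not Flame's API):

    import numpy as np
    from rdkit import Chem
    from rdkit.Chem import Descriptors
    from sklearn.ensemble import RandomForestRegressor

    # Tiny made-up training set: SMILES with a hypothetical activity value.
    smiles = ["CCO", "CCCCO", "c1ccccc1", "CC(=O)O", "CCN", "c1ccccc1O"]
    activity = [0.2, 0.5, 1.1, 0.3, 0.25, 1.0]

    def featurize(smi):
        mol = Chem.MolFromSmiles(smi)   # parsing doubles as a structure check
        return [Descriptors.MolWt(mol), Descriptors.MolLogP(mol),
                Descriptors.TPSA(mol)]

    X = np.array([featurize(s) for s in smiles])
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, activity)
    print(model.predict([featurize("CCCO")]))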


Author(s):  
I G.A. Anom Yudistira

This study aims to describe the capabilities of the simmer package for R, especially for running discrete-event simulation (DES) models; to develop a DES model-building technique that is effective and represents real systems well; and to explore simmer's simulation output, both as statistical summaries and as parameter estimates. The method used in this research is a literature study with descriptive and exploratory approaches. Model development is most effective when it proceeds step by step, from simple models to more complex forms, with the system described using a flow chart. Replication of simulations is easy to perform, yielding standard errors for the model parameter estimators. The simmer output, returned as a data.frame, makes further processing of results straightforward, and simmer's simple R API likewise simplifies running simulations.
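The replication idea carries over to any DES framework. A minimal sketch in Python (a single-server M/M/1 queue via the Lindley recursion rather than a simmer trajectory), where independent replications give a standard error for the estimated mean waiting time:

    import numpy as np

    def replicate(n_customers, arrival_rate, service_rate, rng):
        """One replication of an M/M/1 queue via the Lindley recursion."""
        inter = rng.exponential(1 / arrival_rate, n_customers)
        service = rng.exponential(1 / service_rate, n_customers)
        wait = np.zeros(n_customers)
        for i in range(1, n_customers):
            wait[i] = max(0.0, wait[i - 1] + service[i - 1] - inter[i])
        return wait.mean()

    rng = np.random.default_rng(42)
    reps = [replicate(5000, 0.8, 1.0, rng) for _ in range(30)]
    mean, se = np.mean(reps), np.std(reps, ddof=1) / np.sqrt(len(reps))
    # Steady-state theory for M/M/1: Wq = rho / (mu - lambda) = 4.0 here;
    # starting from an empty queue biases each replication slightly low.
    print(f"mean wait: {mean:.3f} +/- {se:.3f}")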


Metals ◽  
2021 ◽  
Vol 12 (1) ◽  
pp. 43
Author(s):  
Vladimír Chmelko ◽  
Michal Harakaľ ◽  
Pavel Žlábek ◽  
Matúš Margetin ◽  
Róbert Ďurka

The fatigue life curves of materials are very sensitive to the magnitude of the stress amplitude: a small change or inaccuracy in the determined stress value causes large changes or inaccuracies in the calculated fatigue life estimate. The use of computer simulations for fatigue life estimation therefore requires a proper model development methodology. This paper addresses the FEM modeling of stresses in notched components. The modeling parameters that significantly influence the computed stress results are identified. Exact analytical solutions serve as a benchmark for assessing the accuracy of the stress values obtained with the FEM models. For selected 2D and 3D notched components, diagrams were created to analyze the sensitivity of the computed stress to the mesh element density at the notch root, in comparison with the exact analytical solution. The findings from model building were then applied to model the stress concentration at the root of a V-weld joint in a gas pipeline.
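The convergence-study workflow can be sketched as follows. Here fem_kt is a purely synthetic stand-in for a real FEM run (its error decay is invented for illustration); the benchmark Kt = 3 is the Kirsch solution for a circular hole in an infinite plate under uniaxial tension, and the point is the comparison loop against the analytical value, not the numbers:

    import numpy as np

    KT_EXACT = 3.0  # Kirsch solution: stress concentration at a circular hole
                    # in an infinite plate under uniaxial tension

    def fem_kt(elements_at_notch):
        """Hypothetical stand-in for a real FEM run: coarse meshes tend to
        underestimate the notch-root stress, with error decaying as the
        element density increases."""
        return KT_EXACT * (1 - 0.35 / elements_at_notch ** 1.2)

    for n in [1, 2, 4, 8, 16, 32]:
        kt = fem_kt(n)
        print(f"{n:3d} elements at notch root: Kt = {kt:.3f}, "
              f"error = {100 * (kt - KT_EXACT) / KT_EXACT:+.2f}%")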

