Validating Dynamic Engineering Models Under Uncertainty

2016 ◽  
Vol 138 (11) ◽  
Author(s):  
Zequn Wang ◽  
Yan Fu ◽  
Ren-Jye Yang ◽  
Saeed Barbat ◽  
Wei Chen

Validating dynamic engineering models is critically important in practical applications, as it assesses the agreement between simulation results and experimental observations. Though significant progress has been made, existing metrics lack the capability to manage uncertainty in both simulations and experiments, which may stem from computer-model instability, imperfections in material fabrication and manufacturing processes, and variations in experimental conditions. In addition, it is challenging to validate a dynamic model aggregately over both the time domain and a model input space with data at multiple validation sites. To overcome these difficulties, this paper presents an area-based metric to systematically handle uncertainty and validate computational models for dynamic systems over an input space by simultaneously integrating the information from multiple validation sites. To manage the complexity associated with a high-dimensional data space, eigenanalysis is performed on the time-series data from simulations at each validation site to extract the important features. A truncated Karhunen–Loève (KL) expansion is then constructed to represent the responses of dynamic systems, resulting in a set of uncorrelated random coefficients with unit variance. With the development of a hierarchical data-fusion strategy, the probability integral transform (PIT) is then employed to pool all the resulting random coefficients from multiple validation sites across the input space into a single aggregated metric. The dynamic model is thus validated by calculating the cumulative area difference between the cumulative distribution functions. The proposed model validation metric for dynamic systems is illustrated with a mathematical example, a supported beam problem with stochastic loads, and real data from a vehicle occupant-restraint system.
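The main steps of the pipeline described above can be sketched on synthetic data. This is a minimal illustration, not the authors' implementation: the simulated time histories, the number of retained modes, and the assumption that the KL coefficients are well modeled as standard normal are all ours. The final quantity is the area between the empirical CDF of the pooled PIT values and the uniform CDF they should follow under a valid model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def kl_coefficients(Y, n_modes=3):
    """Project mean-centered time-series realizations (rows of Y) onto the
    leading eigenvectors of their sample covariance and scale each
    coefficient to unit variance (a truncated Karhunen-Loeve expansion)."""
    Yc = Y - Y.mean(axis=0)
    cov = np.cov(Yc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    idx = np.argsort(vals)[::-1][:n_modes]
    return Yc @ vecs[:, idx] / np.sqrt(vals[idx])

def area_metric(u):
    """Area between the empirical CDF of pooled PIT values u and the
    uniform CDF, approximated on a regular grid over [0, 1]."""
    u = np.sort(u)
    grid = np.linspace(0.0, 1.0, 501)
    ecdf = np.searchsorted(u, grid, side="right") / len(u)
    return np.mean(np.abs(ecdf - grid))

# Hypothetical data: 200 simulated time histories of length 50 at one site.
t = np.linspace(0, 1, 50)
sims = np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal((200, 50))

coeffs = kl_coefficients(sims, n_modes=2)
# PIT: if the unit-variance coefficients are approximately standard normal,
# Phi(coeff) pools them into [0, 1] values that should look uniform.
u = stats.norm.cdf(coeffs).ravel()
print(round(area_metric(u), 3))  # small values indicate good agreement
```

In the paper's setting the coefficients from multiple validation sites would all be pooled into `u` before computing the area; here a single site stands in for the whole input space.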


1997 ◽  
Vol 08 (06) ◽  
pp. 1345-1360 ◽  
Author(s):  
D. R. Kulkarni ◽  
J. C. Parikh ◽  
A. S. Pandya

A hybrid approach, incorporating concepts of nonlinear dynamics into artificial neural networks (ANNs), is proposed to model a time series generated by complex dynamic systems. We introduce well-known features used in the study of dynamic systems, the time delay τ and the embedding dimension d, for ANN modeling of time series. These features provide a theoretical basis for selecting the optimal number of neurons in the input layer. The main outcome of the new approach is that, to a large extent, it defines the ANN architecture, models the time series, and gives good predictions. As a consequence, we have an integrated and systematic data-driven scheme for modeling time-series data. We illustrate our method on computer-generated periodic and chaotic time series. The ANN models developed gave an excellent quality of fit for the training and test sets, as well as for iterative dynamic predictions of future values of the two time series. Further, computer experiments were conducted in which Gaussian noise of various degrees was introduced into the two time series to simulate real-world effects. We find that, up to a limit, the introduction of noise leads to a smaller network with good generalizing capability.
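The delay-embedding construction that fixes the input-layer size can be sketched as follows. The logistic-map series and the particular choices of d and τ are illustrative, not taken from the paper: each training example is a delay vector of d past values spaced τ apart, and the target is the next value.

```python
import numpy as np

def delay_embed(x, d, tau):
    """Build delay vectors (x[t], x[t+tau], ..., x[t+(d-1)*tau]) and the
    next value after the last coordinate as the prediction target;
    d fixes the number of neurons in the ANN input layer."""
    n = len(x) - (d - 1) * tau - 1
    X = np.column_stack([x[i * tau : i * tau + n] for i in range(d)])
    y = x[(d - 1) * tau + 1 : (d - 1) * tau + 1 + n]
    return X, y

# Hypothetical chaotic series: the logistic map at r = 4.
x = np.empty(500)
x[0] = 0.3
for i in range(1, 500):
    x[i] = 4 * x[i - 1] * (1 - x[i - 1])

X, y = delay_embed(x, d=3, tau=1)
print(X.shape, y.shape)  # (497, 3) (497,)
```

Any feed-forward regressor with a 3-neuron input layer could then be trained on `(X, y)`; the embedding parameters, rather than trial and error, dictate that size.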


2020 ◽  
Vol 34 (6) ◽  
pp. 999-1016 ◽  
Author(s):  
Alexander F. Danvers ◽  
Richard Wundrack ◽  
Matthias Mehl

We provide a basic, step-by-step introduction to the core concepts and mathematical fundamentals of dynamic systems modelling through applying the Change as Outcome model, a simple dynamical systems model, to personality state data. This model characterizes changes in personality states with respect to equilibrium points, estimating attractors and their strength in time series data. Using data from the Personality and Interpersonal Roles study, we find that mean state is highly correlated with attractor position but weakly correlated with attractor strength, suggesting that strength provides added information not captured by summaries of the distribution. We then discuss how taking a dynamic systems approach to personality states also entails a theoretical shift. Instead of emphasizing the partitioning of trait and state variance, dynamic systems analyses of personality states emphasize characterizing patterns generated by mutual, ongoing interactions. Change as Outcome modelling also allows for estimating nuanced effects of personality development after significant life changes, separating effects on a person's characteristic states after the significant change from how strongly the person is drawn towards those states (an aspect of resiliency). Estimating this model demonstrates core dynamics principles and provides quantitative grounding for measures of ‘repulsive’ personality states and ‘ambivert’ personality structures. © 2020 European Association of Personality Psychology


Author(s):  
KAZUHIRO ESAKI ◽  
MUNEO TAKAHASHI

There are two types of models for predicting software reliability at the end of testing. One is the software reliability growth model (a dynamic model) based on a given set of time-series data. The other is the software complexity model (a static model) based on the development environment factors that influence software reliability. As the dynamic model depends on the time factor and the test method used, its prediction accuracy does not necessarily hold for data from practical projects. On the other hand, the static model needs many significant parameters to accurately predict software reliability, and it is very difficult to select, out of the great number of factors affecting software reliability, the main factors that determine those parameters. To resolve these problems, this paper proposes a model to predict the number of embedded errors in a program at the end of the testing phase. The model is based on testing characteristics such as the error detection rate and the test-case density. The result of an experiment shows that the proposed model is more reliable than the conventional models.


Author(s):  
Praphula Jain ◽  
Mani Shankar Bajpai ◽  
Rajendra Pamula

Anomaly detection concerns identifying anomalous observations or patterns that deviate from a dataset's expected behaviour. The detection of anomalies has significant practical applications in several domains such as public health, finance, Information Technology (IT), security, medicine, energy, and climate studies. Density-Based Spatial Clustering of Applications with Noise (DBSCAN) is a density-based clustering algorithm with the capability of identifying anomalous data. In this paper, a modified DBSCAN algorithm is proposed for anomaly detection in time-series data with seasonality. For experimental evaluation, a monthly temperature dataset was employed, and the analysis sets forth the advantages of the modified DBSCAN over the standard DBSCAN algorithm for seasonal datasets. From the results we conclude that standard DBSCAN finds anomalies in a dataset but fails to find local anomalies in seasonal data, whereas the proposed modified DBSCAN finds both the global and the local anomalies. Using standard DBSCAN we detect 19 anomalous points (2.16%), while using the modified DBSCAN we detect 42 anomalous points (4.79%), about 2.63 percentage points more. Hence, the proposed modified DBSCAN algorithm outperforms the standard DBSCAN algorithm in finding local anomalies.
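The global-versus-local distinction can be demonstrated on a toy monthly-temperature series. The paper's exact modification is not reproduced here; deseasonalizing by monthly means before clustering is one plausible reading, and the data are synthetic. In practice one would call scikit-learn's DBSCAN; for a self-contained sketch we compute DBSCAN's noise set directly, using its definition (a noise point is neither a core point nor within eps of one).

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical monthly temperatures: 20 years of a seasonal cycle plus noise,
# with one global outlier and one local outlier (a value normal in summer
# but injected into a January slot).
months = np.tile(np.arange(12), 20)
temps = 15 + 10 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 0.8, 240)
temps[5] = 60.0            # global anomaly: impossible in any month
temps[12] = temps[12] + 8  # local anomaly: plausible value, wrong season

def dbscan_noise(x, eps, min_samples=5):
    """Indices of DBSCAN noise points in a 1-D array: points that are not
    core points (fewer than min_samples neighbours within eps, counting
    themselves) and are not within eps of any core point."""
    d = np.abs(x[:, None] - x[None, :])
    near = d <= eps
    core = near.sum(axis=1) >= min_samples
    noise = ~core & ~(near & core[None, :]).any(axis=1)
    return np.where(noise)[0]

# Standard DBSCAN on raw values: only the extreme global outlier falls
# outside every dense region, because 23 degrees is normal *somewhere*.
global_anoms = dbscan_noise(temps, eps=2.0)

# Modified idea (an assumption, not the paper's exact algorithm): subtract
# each month's typical level first, so seasonal context matters.
monthly_mean = np.array([temps[months == m].mean() for m in range(12)])
residuals = temps - monthly_mean[months]
local_anoms = dbscan_noise(residuals, eps=1.0)

print(len(global_anoms), len(local_anoms))
```

The raw-value pass flags only the impossible reading, while the deseasonalized pass also flags the warm January, mirroring the global/local contrast the abstract describes.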


2021 ◽  
Vol 118 (48) ◽  
pp. e2107794118
Author(s):  
Victor Chernozhukov ◽  
Kaspar Wüthrich ◽  
Yinchu Zhu

We propose a robust method for constructing conditionally valid prediction intervals based on models for conditional distributions such as quantile and distribution regression. Our approach can be applied to important prediction problems, including cross-sectional prediction, k-step-ahead forecasts, synthetic controls and counterfactual prediction, and individual treatment effects prediction. Our method exploits the probability integral transform and relies on permuting estimated ranks. Unlike regression residuals, ranks are independent of the predictors, allowing us to construct conditionally valid prediction intervals under heteroskedasticity. We establish approximate conditional validity under consistent estimation and provide approximate unconditional validity under model misspecification, under overfitting, and with time series data. We also propose a simple “shape” adjustment of our baseline method that yields optimal prediction intervals.
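The core idea, building intervals from the probability integral transform of a fitted conditional distribution rather than from raw residuals, can be sketched on simulated heteroskedastic data. The deliberately crude Gaussian model and the split sizes are our assumptions for illustration, not the authors' estimator; the point is that calibrating on PIT ranks yields intervals that widen with the noise, where a fixed residual band would not.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Hypothetical heteroskedastic data: the noise scale grows with x.
n = 4000
x = rng.uniform(0.0, 2.0, n)
y = 1.0 + 2.0 * x + (0.2 + 0.5 * x) * rng.standard_normal(n)
xt, yt = x[: n // 2], y[: n // 2]   # fit the conditional-distribution model
xc, yc = x[n // 2 :], y[n // 2 :]   # calibrate the ranks

# Crude conditional model: Gaussian with a linear mean and a linear
# scale fitted to absolute residuals (an assumption for illustration).
b1, b0 = np.polyfit(xt, yt, 1)

def mu(xv):
    return b0 + b1 * xv

s1, s0 = np.polyfit(xt, np.abs(yt - mu(xt)), 1)

def sig(xv):
    return np.maximum(s0 + s1 * xv, 1e-6)

# Probability integral transform on the calibration set: under a correct
# model these values are uniform, and their ranks are (approximately)
# independent of x, which is what conditional validity requires.
u = norm.cdf((yc - mu(xc)) / sig(xc))

# A 90% interval inverts the fitted CDF at empirical quantiles of u.
q_lo, q_hi = np.quantile(u, [0.05, 0.95])

def interval(x0):
    return mu(x0) + sig(x0) * norm.ppf(q_lo), mu(x0) + sig(x0) * norm.ppf(q_hi)

lo, hi = interval(xc)
print(round(np.mean((yc >= lo) & (yc <= hi)), 3))  # close to 0.90
```

Even though the moment fits misestimate the true scale, the empirical quantiles of `u` absorb the error, so coverage lands near the nominal level while the interval width still tracks x.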


2020 ◽  
Author(s):  
Anil K. Palepu ◽  
Aditya Murali ◽  
Jenna L. Ballard ◽  
Robert Li ◽  
Samiksha Ramesh ◽  
...  

Abstract
Objectives: To predict short-term outcomes of critically ill patients with traumatic brain injury (TBI) by training machine learning classifiers on two large intensive care databases.
Design: Retrospective analysis of observational data.
Patients: Patients in the multicenter Philips eICU and single-center Medical Information Mart for Intensive Care-III (MIMIC-III) databases with a primary admission diagnosis of TBI, who were in intensive care for over 24 hours.
Interventions: None.
Measurements and Main Results: We identified 1,689 and 126 qualifying TBI patients in eICU and MIMIC-III, respectively. Generalized linear models were used to predict mortality and neurological function at ICU discharge using features derived from clinical, laboratory, medication, and physiological time series data obtained in the first 24 hours after ICU admission. Models were trained, tested, and validated in eICU, then validated externally in MIMIC-III. Model discrimination, determined by area under the receiver operating characteristic curve (AUROC) analysis, was 0.903 and 0.874 for mortality and neurological function, respectively. Performance was maintained when the models were tested in the independent MIMIC-III dataset (AUROC 0.958 and 0.878 for mortality and neurological function, respectively).
Conclusions: Computational models trained with data available in the first 24 hours after admission accurately predict discharge outcomes in ICU-stratum TBI patients.
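The modeling step, a generalized linear model scored by AUROC, can be sketched end to end on synthetic stand-in features. Nothing here uses the eICU or MIMIC-III data; the feature dimensions, coefficients, and train/test split are hypothetical, and the GLM is fitted by plain gradient ascent to keep the example dependency-free.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical stand-ins for first-24-hour ICU features (e.g. summaries of
# GCS, age, vital-sign time series) and a binary mortality label.
n, p = 1500, 4
X = rng.standard_normal((n, p))
true_w = np.array([1.5, -1.0, 0.5, 0.0])
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-(X @ true_w - 0.5)))).astype(float)

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Plain-vanilla GLM with a logit link, fitted by gradient ascent
    on the log-likelihood (intercept included as a column of ones)."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        prob = 1 / (1 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - prob) / len(y)
    return w

def auroc(y, score):
    """Area under the ROC curve via the rank-sum formula: the probability
    that a random positive is scored above a random negative."""
    order = np.argsort(score)
    ranks = np.empty(len(score))
    ranks[order] = np.arange(1, len(score) + 1)
    n_pos, n_neg = y.sum(), len(y) - y.sum()
    return (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Train on the first 1000 "patients", evaluate on the held-out 500.
w = fit_logistic(X[:1000], y[:1000])
scores = np.column_stack([np.ones(500), X[1000:]]) @ w
print(round(auroc(y[1000:], scores), 3))
```

The held-out AUROC plays the same role as the eICU internal validation in the study; external validation would score the same `w` against a second, independently collected dataset.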


2021 ◽  
Author(s):  
Meshrif Alruily ◽  
Mohamed Ezz ◽  
Ayman Mohamed Mostafa ◽  
Nacim Yanes ◽  
Mostafa Abbas ◽  
...  

Abstract
Accurate forecasting of emerging infectious diseases can guide public health officials in making appropriate decisions related to the allocation of public health resources. Due to the exponential spread of the COVID-19 infection worldwide, several computational models for forecasting the transmission and mortality rates of COVID-19 have been proposed in the literature. To accelerate scientific and public health insights into the spread and impact of COVID-19, Google released the Google COVID-19 search trends symptoms open-access dataset. Our objective is to develop 7- and 14-day-ahead forecasting models of COVID-19 transmission and mortality in the US using the Google search trends for COVID-19-related symptoms. Specifically, we propose a stacked long short-term memory (SLSTM) architecture for predicting COVID-19 confirmed and death cases using historical time series data combined with auxiliary time series data from the Google COVID-19 search trends symptoms dataset. Taking the SLSTM networks trained on historical data only as the base models, our base models for 7- and 14-day-ahead forecasting of COVID-19 cases had mean absolute percentage error (MAPE) values of 6.6% and 8.8%, respectively, whereas our proposed models had improved MAPE values of 3.2% and 5.6%. For 7- and 14-day-ahead forecasting of COVID-19 deaths, the MAPE values of the base models were 4.8% and 11.4%, while the improved MAPE values of our proposed models were 4.7% and 7.8%, respectively. We found that the Google search trends for “pneumonia,” “shortness of breath,” and “fever” are the most informative search trends for predicting COVID-19 transmission, and that the search trends for “hypoxia” and “fever” were the most informative for forecasting COVID-19 mortality.
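The SLSTM itself needs a deep-learning framework, but the two framework-independent pieces of the setup, building direct h-step-ahead training pairs from a series and scoring forecasts by MAPE, can be sketched on their own. The lookback length and the synthetic case-count series below are assumptions for illustration, not taken from the study.

```python
import numpy as np

def make_windows(series, lookback, horizon):
    """Turn a univariate series into (input window, h-step-ahead target)
    pairs for direct multi-step forecasting: the target is the value
    `horizon` steps after the end of each window."""
    X, y = [], []
    for i in range(len(series) - lookback - horizon + 1):
        X.append(series[i : i + lookback])
        y.append(series[i + lookback + horizon - 1])
    return np.array(X), np.array(y)

def mape(actual, forecast):
    """Mean absolute percentage error, the metric reported in the study."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100 * np.mean(np.abs((actual - forecast) / actual))

# Hypothetical daily case counts; a 28-day lookback with separate 7- and
# 14-day-ahead targets mirrors the two forecasting horizons.
cases = 1000 + 50 * np.arange(100) + 20 * np.sin(np.arange(100))
X7, y7 = make_windows(cases, lookback=28, horizon=7)
X14, y14 = make_windows(cases, lookback=28, horizon=14)
print(X7.shape, y7.shape, X14.shape)  # (66, 28) (66,) (59, 28)
```

In the paper's setup, each input window would additionally be stacked with the aligned symptom search-trend series before being fed to the SLSTM; the windowing logic is unchanged.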


2019 ◽  
Author(s):  
Alexander Francois Danvers ◽  
Richard Wundrack ◽  
Matthias R. Mehl

We provide a basic, step-by-step introduction to the core concepts and mathematical fundamentals of dynamic systems modeling through applying the Change as Outcome model, a simple dynamical systems model, to personality state data. This model characterizes changes in personality states with respect to equilibrium points, estimating attractors and their strength in time series data. Using data from the Personality and Interpersonal Roles (PAIRS) study, we find that mean state is highly correlated with attractor position but weakly correlated with attractor strength, suggesting that strength provides added information not captured by summaries of the distribution. We then discuss how taking a dynamic systems approach to personality states also entails a theoretical shift. Instead of emphasizing the partitioning of trait and state variance, dynamic systems analyses of personality states emphasize characterizing patterns generated by mutual, ongoing interactions. Change as Outcome modeling also allows for the effects of personality development after significant life changes to be conceptualized in more nuanced ways, separating effects on characteristic states after the significant change from how strongly people are drawn towards those states (an aspect of resiliency). Estimating this model demonstrates core dynamics principles and provides quantitative grounding for measures of “repulsive” personality states and “ambivert” personality structures. Supplementary materials: https://osf.io/dps4w.

