Some Metrics and a Bayesian Procedure for Validating Predictive Models in Engineering Design

Author(s):  
Wei Chen ◽  
Ying Xiong ◽  
Kwok-Leung Tsui ◽  
Shuchun Wang

Even though model-based simulations are widely used in engineering design, it remains a challenge to validate models and assess the risks and uncertainties associated with the use of predictive models for design decision making. In most of the existing work, model validation is viewed as verifying model accuracy, measured by the agreement between computational and experimental results. From the design perspective, however, a good model is one that can discriminate (with good resolution) between design candidates. In this work, a Bayesian approach is presented to assess the uncertainty in model prediction by combining data from both physical experiments and the computer model. Based on this uncertainty quantification, design-oriented model validation metrics are developed to help designers achieve high confidence in using predictive models to make a specific design decision. We demonstrate that the Bayesian approach provides a flexible framework for drawing inferences about predictions in the intended, but possibly untested, design domain, where the design settings of physical experiments and the computer model may or may not overlap. The implications of the proposed validation metrics are studied, and their potential roles in a model validation procedure are highlighted.
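The notion of a decision-oriented metric can be illustrated with a small Monte Carlo sketch (hypothetical, not the paper's actual metrics): estimate the probability that design A truly outperforms design B when both predicted objectives carry uncertainty, assuming normal predictive distributions purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_correct_ranking(mu_a, sd_a, mu_b, sd_b, n=100_000):
    """Monte Carlo estimate of P(design A beats design B) for a
    minimization objective, given normal predictive distributions."""
    y_a = rng.normal(mu_a, sd_a, n)
    y_b = rng.normal(mu_b, sd_b, n)
    return float(np.mean(y_a < y_b))
```

A probability near 1 means the prediction uncertainty is small enough to discriminate between the two candidates; a value near 0.5 means the model cannot support the design choice, however accurate it may look pointwise.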

2007 ◽  
Vol 130 (2) ◽  
Author(s):  
Wei Chen ◽  
Ying Xiong ◽  
Kwok-Leung Tsui ◽  
Shuchun Wang

In most of the existing work, model validation is viewed as verifying model accuracy, measured by the agreement between computational and experimental results. Because resources are limited, accuracy can be assessed at only a small number of test points. From the design perspective, however, a good model is one that can discriminate (with good resolution) between competing design candidates under uncertainty. In this work, a design-driven validation approach is presented. By combining data from both physical experiments and the computer model, a Bayesian approach is employed to develop a prediction model that replaces the original computer model for the purpose of design. Based on the uncertainty quantification of the Bayesian prediction and, subsequently, that of a design objective, decision validation metrics are developed to assess the confidence in using the Bayesian prediction model to make a specific design choice. We demonstrate that the Bayesian approach provides a flexible framework for drawing inferences about predictions in the intended, but possibly untested, design domain. The applicability of the proposed decision validation metrics is examined for designs with either a discrete or a continuous set of design alternatives. The approach is demonstrated through an illustrative example of a robust engine piston design.
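A heavily simplified sketch of the combination idea, assuming a constant model bias rather than the paper's full Bayesian prediction model: experiments are modeled as the simulator output plus a bias delta and measurement noise, and a conjugate normal update gives the posterior on delta. The prior parameters and noise level are illustrative assumptions.

```python
import numpy as np

def bayes_bias_posterior(y_exp, y_sim, sigma_e=0.1, mu0=0.0, tau0=1.0):
    """Conjugate normal update for a constant model bias delta,
    where y_exp = y_sim + delta + noise, noise ~ N(0, sigma_e^2)."""
    d = np.asarray(y_exp) - np.asarray(y_sim)   # observed discrepancies
    prec = 1.0 / tau0**2 + d.size / sigma_e**2  # posterior precision
    mean = (mu0 / tau0**2 + d.sum() / sigma_e**2) / prec
    return mean, 1.0 / np.sqrt(prec)            # posterior mean and sd

def predict(y_sim_new, post_mean, post_sd, sigma_e=0.1):
    """Bias-corrected prediction with combined uncertainty."""
    return y_sim_new + post_mean, np.sqrt(post_sd**2 + sigma_e**2)
```

The predictive standard deviation combines the remaining bias uncertainty with the measurement noise, so it shrinks as more experiments are added but never below the noise floor.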


Author(s):  
Byeng D. Youn ◽  
Byung C. Jung ◽  
Zhimin Xi ◽  
Sang Bum Kim

As the role of predictive models has increased, the fidelity of computational results has become a great concern to engineering decision makers. Often, our limited understanding of complex systems leads to building inappropriate predictive models. To address this growing concern, this paper proposes a hierarchical model validation procedure with two validation activities: (1) validation planning (top-down) and (2) validation execution (bottom-up). In validation planning, engineers define either the physics-of-failure (PoF) mechanisms or the system performances of interest. The engineering system is then decomposed into subsystems or components whose computer models are partially valid in terms of those PoF mechanisms or system performances. Validation planning identifies the vital tests and predictive models, along with both known and unknown model parameters. Validation execution takes a bottom-up approach, improving the fidelity of the computer model at each hierarchical level using a statistical calibration technique. This technique compares the observed test results with the results predicted by the computer model, using a likelihood function as the comparison metric. In the statistical calibration, an optimization technique is employed to maximize the likelihood function while determining the unknown model parameters. As the predictive model at a lower level of the hierarchy becomes valid, the validated model is fused into a model at the next higher level, and validation execution continues there. A cellular phone is used to demonstrate the hierarchical validation of predictive models presented in this paper.
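The calibration step can be sketched as maximizing a Gaussian likelihood over a grid of candidate parameter values (a toy stand-in for the paper's optimization-based statistical calibration; `model`, `sigma`, and the grid are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def log_likelihood(theta, x, y_obs, model, sigma=0.2):
    """Gaussian log-likelihood of observed tests given the computer
    model evaluated at candidate parameter theta."""
    r = np.asarray(y_obs) - model(np.asarray(x), theta)
    return -0.5 * np.sum((r / sigma) ** 2) - r.size * np.log(sigma * np.sqrt(2.0 * np.pi))

def calibrate(x, y_obs, model, grid):
    """Grid search for the unknown parameter that maximizes the likelihood."""
    ll = [log_likelihood(t, x, y_obs, model) for t in grid]
    return float(grid[int(np.argmax(ll))])
```

A real implementation would use a continuous optimizer rather than a grid, but the objective is the same: the parameter value under which the observed tests are most probable.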


Author(s):  
George A. Hazelrigg

Models are the basis for all prediction of system behavior, and hence form a crucial element of engineering design. A key concern is the validity of such models. This paper discusses the notion of model validity and the limits of what one can say about the validity of a specific model. It is shown that predictive models, such as those used in engineering design, cannot be validated objectively. That is, the validation of a predictive model can be accomplished only in the context of a specific decision, and only in the context of subjective input from the decision maker, including preferences.


2011 ◽  
Vol 133 (7) ◽  
Author(s):  
Yu Liu ◽  
Wei Chen ◽  
Paul Arendt ◽  
Hong-Zhong Huang

Model validation metrics have been developed to provide a quantitative measure that characterizes the agreement between predictions and observations. In engineering design, the metrics become useful for model selection when alternative models are being considered. Additionally, the predictive capability of a computational model needs to be assessed before it is used in engineering analysis and design. Due to the various sources of uncertainty in both computer simulations and physical experiments, model validation must be conducted based on stochastic characteristics, and currently there is no unified validation metric that is widely accepted. In this paper, we present a classification of validation metrics based on their key characteristics, along with a discussion of the desired features. Focusing on stochastic validation with uncertainty in both predictions and physical experiments, four main types of metrics, namely classical hypothesis testing, the Bayes factor, the frequentist's metric, and the area metric, are examined to provide a better understanding of the pros and cons of each. Using mathematical examples, a set of numerical studies is designed to answer various research questions and to study how sensitive these metrics are to the experimental data size, the uncertainty from measurement error, and the uncertainty in unknown model parameters. The insight gained from this work provides useful guidelines for choosing the appropriate validation metric in engineering applications.
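Of the four, the area metric has a particularly compact form: the area between the empirical CDFs of model predictions and experimental observations. A minimal sketch, assuming plain i.i.d. samples from each source:

```python
import numpy as np

def area_metric(pred_samples, exp_samples):
    """Area between the empirical CDFs of model predictions and
    experimental observations; smaller means better agreement."""
    pred = np.sort(np.asarray(pred_samples, dtype=float))
    exp = np.sort(np.asarray(exp_samples, dtype=float))
    grid = np.sort(np.concatenate([pred, exp]))   # pooled support
    F_pred = np.searchsorted(pred, grid, side="right") / pred.size
    F_exp = np.searchsorted(exp, grid, side="right") / exp.size
    # both CDFs are constant between pooled points, so this sum is exact
    return float(np.sum(np.abs(F_pred - F_exp)[:-1] * np.diff(grid)))
```

Unlike a hypothesis test, the result is in the units of the response itself: two distributions offset by a constant shift yield an area approximately equal to that shift.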


2008 ◽  
Vol 130 (4) ◽  
Author(s):  
Tiefu Shao ◽  
Sundar Krishnamurty

This paper addresses the critical issue of effectiveness and efficiency in simulation-based optimization using surrogate models as predictive models in engineering design. Specifically, it presents a novel clustering-based multilocation search (CMLS) procedure to iteratively improve the fidelity and efficacy of Kriging models in the context of design decisions. This approach addresses a potential drawback of surrogate-model-based design optimization: the use of surrogate models may result in suboptimal solutions because the global optimum can be smoothed out if the sampling scheme fails to capture the critical points of interest with enough fidelity or clarity. The paper details how this problem of smoothing out the best (SOB) can remain unsolved in multimodal systems, even when a sequential model updating strategy is employed, and can lead to erroneous outcomes. To overcome the SOB defect, this paper presents the CMLS method, which uses a clustering-based procedure to screen out distinct potential optimal points for subsequent model validation and updating from a design decision perspective. It is embedded within a genetic algorithm setup to capture the buried, transient, yet inherent data patterns in the design evolution based on the principles of data mining, which are then used to improve the overall performance and effectiveness of surrogate-model-based design optimization. Four illustrative case studies, including a 21-bar truss problem, are detailed to demonstrate the application of the CMLS methodology, and the results are discussed.
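The screening idea, reduced to one dimension for illustration (a rough stand-in for the CMLS procedure, which operates inside a genetic algorithm, not this code): keep the best-performing sample points, group them into gap-separated clusters, and retain the best point of each cluster so that distinct local optima survive for later validation and model updating.

```python
import numpy as np

def screen_candidates(points, values, keep_frac=0.2, min_gap=0.5):
    """Keep the best fraction of 1-D sample points, split them into
    clusters wherever consecutive points are more than min_gap apart,
    and return the best point of each cluster (minimization)."""
    points, values = np.asarray(points), np.asarray(values)
    n_keep = max(1, int(keep_frac * len(points)))
    idx = np.argsort(values)[:n_keep]       # best-performing points
    pts, vals = points[idx], values[idx]
    order = np.argsort(pts)                 # sort by coordinate
    pts, vals = pts[order], vals[order]
    reps, start = [], 0
    for i in range(1, len(pts) + 1):
        if i == len(pts) or pts[i] - pts[i - 1] > min_gap:
            seg = slice(start, i)           # one cluster ends here
            reps.append(float(pts[seg][np.argmin(vals[seg])]))
            start = i
    return np.array(reps)
```

On a bimodal objective, a single best-point selection would keep only one basin; the gap-based clustering keeps a representative from each, which is exactly the behavior needed to avoid the SOB defect.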


1999 ◽  
Vol 11 (4) ◽  
pp. 218-228 ◽  
Author(s):  
Michael J. Scott ◽  
Erik K. Antonsson

1986 ◽  
Vol 71 ◽  
Author(s):  
I. Suni ◽  
M. Finetti ◽  
K. Grahn

A computer model based on the finite element method has been applied to evaluate the effect of the parasitic area between the contact and diffusion edges on end-resistance measurements in four-terminal Kelvin resistor structures. The model is then applied to Al/Ti/n+Si contacts, and a contact resistivity of ρc = 1.8 × 10−7 Ω·cm2 is derived. For comparison, the use of a self-aligned structure to avoid parasitic effects is presented, and the first experimental results obtained on Al/Ti/n+Si and Al/CoSi2/n+Si contacts are shown and discussed.


1988 ◽  
Vol 32 (2) ◽  
pp. 168-172 ◽  
Author(s):  
Christopher D. Wickens ◽  
Kelly Harwood ◽  
Leon Segal ◽  
Inge Tkalcevic ◽  
Bill Sherman

The objective of this research was to establish the validity of predictive models of workload in the context of a controlled simulation of a helicopter flight mission. The models that were evaluated contain increasing levels of sophistication in their assumptions about the competition for processing resources underlying multiple-task performance. Ten subjects performed the simulation, which involved various combinations of a low-level flight task with three cognitive side tasks pertaining to navigation, spatial awareness, and computation. Side-task information was delivered auditorily or visually. Results indicated that subjective workload is best predicted by relatively simple models that integrate the total demands of tasks over time (r = 0.65). In contrast, performance is not well predicted by these models (r < 0.10), but is best predicted by models that assume differential competition between processing resources (r = 0.47). The relevance of these data to predictive models and to the use of subjective measures for model validation is discussed.


1984 ◽  
Vol 11 (3) ◽  
pp. 423-429 ◽  
Author(s):  
Malcolm J. S. Hirst

This paper presents the results of a parametric study into the thermal loading of concrete bridges by solar radiation. All results were obtained using a computer model calibrated from field measurements. The model computes the loading parameters from the bridge characteristics and the standard daily records of the weather bureau. The design method given uses an effective thickness concept to find the effects of a wearing course on the temperature profile of the underlying bridge. Thermal loading depends on climate and is extremely variable. Histograms are presented, which show the frequency distributions of the loading parameters for sample bridges at three Australian sites covering a range of climatic regimes from tropical to temperate. Key words: bridges, concrete, loads, temperature, solar radiation, structural engineering, design chart.

