Reliability Analysis With Model Uncertainty Coupling With Parameter and Experiment Uncertainties: A Case Study of 2014 Verification and Validation Challenge Problem

Author(s):  
Zhimin Xi ◽  
Ren-Jye Yang

A validation strategy with a copula-based bias approximation approach is proposed to address the 2014 Verification and Validation (V&V) challenge problem developed by Sandia National Laboratories. The proposed work further incorporates model uncertainty into reliability analysis. Specific issues are addressed, including: (i) uncertainty modeling of model parameters using the Bayesian approach, (ii) uncertainty quantification (UQ) of model outputs using the eigenvector dimension reduction (EDR) method, (iii) model bias calibration with the U-pooling metric, (iv) model bias approximation using the copula-based approach, and (v) reliability analysis considering the model uncertainty. The proposed approach is demonstrated on the challenge problem.
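To make step (iii) concrete, here is a minimal numpy sketch of the U-pooling area metric as it is commonly defined in the validation literature: experimental observations are pooled through the model CDF, and the area between the empirical CDF of the pooled u-values and the CDF of U(0, 1) measures model-experiment disagreement. The function name and the Normal example model are illustrative, not the authors' code.

import numpy as np
from scipy import stats

def u_pooling_area(model_cdf, experiments):
    # Pool each observation through the model CDF; under a perfect
    # model the pooled u-values are distributed U(0, 1).
    u = np.sort(np.asarray([model_cdf(y) for y in experiments]))
    n = len(u)
    grid = np.concatenate(([0.0], u, [1.0]))
    area = 0.0
    for i in range(len(grid) - 1):
        lo, hi = grid[i], grid[i + 1]
        ecdf = i / n  # the empirical CDF of u equals ecdf on [lo, hi)
        if ecdf <= lo or ecdf >= hi:
            area += abs(ecdf - 0.5 * (lo + hi)) * (hi - lo)
        else:  # the diagonal crosses the ECDF level inside the interval
            area += 0.5 * ((ecdf - lo) ** 2 + (hi - ecdf) ** 2)
    return area  # 0 = perfect agreement; larger = more model bias

# Example: model predicts a Normal(10, 2) output; five test measurements.
print(u_pooling_area(stats.norm(10, 2).cdf, [9.1, 11.3, 10.2, 8.7, 12.5]))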

2019 ◽  
Vol 141 (5) ◽  
Author(s):  
Zhimin Xi

Model-based reliability analysis may not be practically useful if the reliability estimate contains uncontrollable errors. This paper addresses potential reliability estimation errors arising from model bias together with model parameter uncertainty. For three representative scenarios, reliability analysis strategies with representative methods are proposed. The pros and cons of these strategies are discussed and demonstrated using a tank storage problem based on finite element models with different fidelity levels. It is found that confidence-based reliability analysis, with epistemic uncertainty modeling for both model bias and model parameters, makes reliability estimation errors controllable with less conservativeness than direct reliability modeling using the Bayesian approach.
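The flavor of a confidence-based estimate can be sketched in a few lines: treat the model bias as an epistemic interval rather than a distribution, and report the reliability band it induces rather than a single number. The limit state, bias bounds, and input distributions below are hypothetical placeholders, not the paper's tank problem.

import numpy as np

rng = np.random.default_rng(0)

def reliability_band(limit_state, bias_bounds, n_epistemic=50, n_mc=100_000):
    # Outer loop: sample an epistemic bias offset from its interval
    # (no distribution is claimed for it). Inner loop: crude Monte
    # Carlo over aleatory inputs. Report the induced reliability band.
    rels = []
    for _ in range(n_epistemic):
        bias = rng.uniform(*bias_bounds)
        x = rng.normal(loc=[5.0, 3.0], scale=[0.5, 0.3], size=(n_mc, 2))
        g = limit_state(x) + bias          # bias-corrected model output
        rels.append(np.mean(g > 0.0))
    return min(rels), max(rels)            # conservative decisions use min

# Hypothetical limit state: capacity minus demand, g > 0 means safe.
lo, hi = reliability_band(lambda x: x[:, 0] - x[:, 1], bias_bounds=(-0.2, 0.2))
print(f"reliability in [{lo:.4f}, {hi:.4f}]")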


Author(s):  
Sheng-Jia Ruan ◽  
Yan-Hui Lin

Standby redundancy can meet system safety requirements in industries with high reliability standards. To evaluate the reliability of standby systems, failure dependency among components has to be considered, especially when systems have load-sharing characteristics. In this paper, a reliability analysis and state transfer scheduling optimization framework is proposed for a load-sharing 1-out-of-N:G system equipped with M warm standby components and subject to continuous degradation processes. First, the system reliability function considering multiple dependent components is derived recursively. Then, a Monte Carlo method is developed and the closed Newton-Cotes quadrature rule is invoked for system reliability quantification. In addition, likelihood functions are constructed from the measurement information to estimate the model parameters of both active and standby components, whose degradation paths are modeled by step-wise drifted Wiener processes. Finally, the system state transfer scheduling is optimized by a genetic algorithm to maximize the system reliability at the mission time. The proposed methodology and its effectiveness are illustrated through a case study of a simplified aircraft hydraulic system.
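As a rough illustration of the degradation model, here is a crude Monte Carlo sketch of a 1-out-of-N:G system with warm standbys whose wear follows drifted Wiener processes (full drift when active, reduced drift in standby); a unit fails when its degradation crosses a threshold. The scheduled state transfers and the likelihood-based parameter estimation of the paper are omitted, and all numbers are illustrative.

import numpy as np

rng = np.random.default_rng(1)

def standby_reliability(t_mission=5.0, n_standby=2, mu_active=1.0,
                        mu_standby=0.2, sigma=0.3, threshold=4.0,
                        dt=0.01, n_sim=5_000):
    steps = int(t_mission / dt)
    survived = 0
    for _ in range(n_sim):
        x = np.zeros(n_standby + 1)     # degradation level of each unit
        active = 0                      # index of the active unit
        ok = True
        for _ in range(steps):
            drift = np.full(n_standby + 1, mu_standby)
            drift[active] = mu_active   # load-carrying unit wears faster
            x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(x.size)
            if x[active] >= threshold:  # active unit fails: switch over
                healthy = np.flatnonzero(x < threshold)
                if healthy.size == 0:
                    ok = False          # no unit left -> system failure
                    break
                active = healthy[0]
        survived += ok
    return survived / n_sim

print(f"R(t_mission) ~= {standby_reliability():.3f}")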


Computation ◽  
2018 ◽  
Vol 6 (4) ◽  
pp. 54 ◽  
Author(s):  
Senthil Raman ◽  
Heuy Kim

A centrifugal compressor working with supercritical CO2 (S-CO2) has several advantages over other supercritical and conventional compressors. S-CO2 is nearly as dense as liquid CO2 and, like a liquid, is difficult to compress further; as a result, an S-CO2 centrifugal compressor requires less compression work during operation than one handling gaseous CO2. The performance of S-CO2 compressors varies strongly with the tip clearance and with the vanes in the diffuser. To improve the performance of the S-CO2 centrifugal compressor, knowledge of the influence of individual components on the performance characteristics is necessary. The present study considers an S-CO2 compressor designed with traditional engineering design tools based on ideal-gas behaviour and tested by Sandia National Laboratories. Three-dimensional, steady, viscous flow through the S-CO2 compressor was analysed with a computational fluid dynamics solver based on the finite volume method. The Navier-Stokes equations are solved with the k-ω SST turbulence model at operating conditions in the supercritical regime. The performance of the impeller, the main component of the centrifugal compressor, is compared for impeller-only, vaneless-diffuser, and vaned-diffuser configurations. The flow characteristics of the shrouded impeller are also studied to analyse the tip-leakage effect.
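The gap between ideal-gas design tools and real near-critical behaviour is easy to quantify. The snippet below (assuming the CoolProp property library is installed, and using approximate inlet conditions for the Sandia loop) compares real and ideal-gas densities; the several-fold difference is what makes ideal-gas sizing of S-CO2 machines unreliable.

from CoolProp.CoolProp import PropsSI   # real-fluid property library

T, P = 305.7, 7.69e6    # approximate loop inlet: kelvin, pascal
M, R = 0.04401, 8.314   # CO2 molar mass (kg/mol), gas constant (J/mol/K)

rho_real = PropsSI('D', 'T', T, 'P', P, 'CO2')  # real-gas density, kg/m^3
rho_ideal = P * M / (R * T)                     # ideal-gas estimate

print(f"real: {rho_real:.0f} kg/m^3, ideal-gas: {rho_ideal:.0f} kg/m^3")
# Near the critical point (304.1 K, 7.38 MPa) the real density is several
# times the ideal-gas value, so ideal-gas sizing misjudges the work input.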


2020 ◽  
Vol 76 (10) ◽  
pp. 912-925
Author(s):  
Thomas C. Terwilliger ◽  
Oleg V. Sobolev ◽  
Pavel V. Afonine ◽  
Paul D. Adams ◽  
Randy J. Read

Density modification uses expectations about features of a map, such as a flat solvent and the expected distributions of density in the region of the macromolecule, to improve the individual Fourier terms representing the map. This process transfers information from one part of a map to another and can improve the accuracy of a map. Here, the assumptions behind density modification for maps from electron cryomicroscopy are examined and a procedure is presented that allows the incorporation of model-based information. Density modification works best in cases where unfiltered, unmasked maps with clear boundaries between the macromolecule and solvent are available, and where there is substantial noise in the map, both in the region of the macromolecule and in the solvent. It is also most effective if the characteristics of the map are relatively constant within the regions of the macromolecule and the solvent. Model-based information can be used to improve density modification, but model bias can in principle occur. Here, model bias is reduced by using ensemble models that allow an estimation of model uncertainty. A test of model bias is presented which suggests that even if the expected density in a region of a map is specified incorrectly by using an incorrect model, the incorrect expectations do not strongly affect the final map.
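The core cycle is simple to caricature: impose the real-space expectation (a flat solvent), return to reciprocal space, and blend the modified Fourier terms with the observed ones. The toy numpy sketch below uses a fixed blending weight, whereas real procedures estimate resolution-dependent weights from the agreement between the maps; it is a caricature of the idea, not the published algorithm.

import numpy as np

def density_modify(rho, solvent_mask, n_cycles=5, weight=0.5):
    # Blend reciprocal-space terms of the solvent-flattened map with
    # the observed terms, transferring information between map regions.
    f_obs = np.fft.fftn(rho)
    for _ in range(n_cycles):
        rho_mod = rho.copy()
        rho_mod[solvent_mask] = rho[solvent_mask].mean()  # flat solvent
        f_mod = np.fft.fftn(rho_mod)
        rho = np.fft.ifftn(weight * f_mod + (1.0 - weight) * f_obs).real
    return rho

# Toy 3D map: a blob of "macromolecule" density plus noise everywhere.
g = np.indices((32, 32, 32))
blob = np.exp(-(((g - 16) ** 2).sum(axis=0)) / 30.0)
noisy = blob + 0.3 * np.random.default_rng(2).standard_normal(blob.shape)
improved = density_modify(noisy, solvent_mask=blob < 0.1)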


2017 ◽  
Vol 46 (5) ◽  
pp. 805-825 ◽  
Author(s):  
Li Wan ◽  
Ying Jin

Robust calibration and validation of applied urban models are prerequisites for their successful, policy-cogent use. This is particularly important today when expert assessment is questioned and closely scrutinized. This paper proposes a new model calibration-validation strategy based on a spatial equilibrium model that incorporates multiple time horizons, such that the predictive capabilities of the model can be empirically tested. The model is implemented for the Greater Beijing city region and the model validation strategy is demonstrated over the Census years 2000 to 2010. Through forward/backward forecasting, the model validation helps to verify the stability of the model parameters as well as the predictive capabilities of the recursive equilibrium framework. The proposed modelling strategy sets a new standard for verifying and validating recursive equilibrium models. We also consider the wider implications of the approach.
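Stripped of the spatial equilibrium machinery, the forward/backward test has a simple shape: calibrate on one census pair, forecast forward and backcast backward, and check that the errors stay small in both directions. The one-parameter growth model below is a deliberately trivial stand-in for the paper's model; zone counts, growth rates, and noise are invented for illustration.

import numpy as np

rng = np.random.default_rng(3)

# Twenty zones, census populations in 2000, and a noisy 3%/yr growth
# process standing in for the urban system being modelled.
p2000 = rng.uniform(50.0, 500.0, size=20)
p2010 = p2000 * np.exp(10 * (0.03 + 0.005 * rng.standard_normal(20)))

g_hat = np.mean(np.log(p2010 / p2000)) / 10                # calibration
fwd = np.abs(p2000 * np.exp(10 * g_hat) - p2010) / p2010   # forward forecast
bwd = np.abs(p2010 * np.exp(-10 * g_hat) - p2000) / p2000  # backward forecast

# Small, comparable errors in both directions (and parameters that stay
# stable when recalibrated on other census pairs) are the evidence the
# validation strategy looks for.
print(f"mean forward error {fwd.mean():.1%}, backward error {bwd.mean():.1%}")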


2005 ◽  
Vol 128 (3) ◽  
pp. 626-635 ◽  
Author(s):  
Gregory D. Buckner ◽  
Heeju Choi ◽  
Nathan S. Gibson

Robust control techniques require a dynamic model of the plant and bounds on model uncertainty to formulate control laws with guaranteed stability. Although techniques for modeling dynamic systems and estimating model parameters are well established, very few procedures exist for estimating uncertainty bounds. In the case of H∞ control synthesis, a conservative weighting function for model uncertainty is usually chosen to ensure closed-loop stability over the entire operating space. The primary drawback of this conservative, “hard computing” approach is reduced performance. This paper demonstrates a novel “soft computing” approach to estimate bounds of model uncertainty resulting from parameter variations, unmodeled dynamics, and nondeterministic processes in dynamic plants. This approach uses confidence interval networks (CINs), radial basis function networks trained using asymmetric bilinear error cost functions, to estimate confidence intervals associated with nominal models for robust control synthesis. This research couples the “hard computing” features of H∞ control with the “soft computing” characteristics of intelligent system identification, and realizes the combined advantages of both. Simulations and experimental demonstrations conducted on an active magnetic bearing test rig confirm these capabilities.
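The asymmetric bilinear cost is what statisticians call the pinball loss: penalizing positive and negative residuals unequally drives a regressor toward a chosen conditional quantile, so a pair of networks brackets the response. Below is a minimal numpy sketch of that training loop for an RBF network; the toy data and all settings are illustrative, not the paper's CIN implementation or the magnetic bearing rig.

import numpy as np

rng = np.random.default_rng(4)

def rbf_features(x, centers, width=0.15):
    return np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)

def fit_quantile_rbf(x, y, q, centers, lr=0.05, epochs=5_000):
    # Gradient descent on the asymmetric ("pinball") cost: residuals above
    # the curve are weighted q, residuals below are weighted (1 - q), so
    # the fitted curve converges to the q-th conditional quantile.
    phi = rbf_features(x, centers)
    w = np.zeros(len(centers))
    for _ in range(epochs):
        r = y - phi @ w
        grad = -phi.T @ np.where(r > 0.0, q, q - 1.0) / len(y)
        w -= lr * grad
    return w

# Toy plant response with input-dependent noise (not the AMB test rig).
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + (0.05 + 0.2 * x) * rng.standard_normal(200)
centers = np.linspace(0.0, 1.0, 12)
w_hi = fit_quantile_rbf(x, y, 0.95, centers)   # upper confidence bound
w_lo = fit_quantile_rbf(x, y, 0.05, centers)   # lower confidence bound
band = rbf_features(x, centers) @ np.column_stack([w_lo, w_hi])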


Author(s):  
Kevin Otto ◽  
Clas Jacobson

Verifying and validating that a mechanical system meets the design requirements is often a costly iterative activity. This is particularly true, for example, with complex vehicle systems that must meet noise and vibration requirements to ensure vehicle occupant comfort. We show here how analysis of model uncertainty can speed verification testing by bounding and guiding hardware prototype redesign. Vibration and acoustic model uncertainty and residual errors are estimated, and an analysis is then derived to ensure that this uncertainty range is covered by a planned set of design changes. We further use these results to define a complexity metric based on uncertainty and an adaptability metric based on the domain of available adjustment. We then propose a capability metric that compares the range of uncertainty against the range of adaptability provided. We demonstrate the efficacy with an example from elevator system design, rapidly meeting noise and vibration requirements with only one prototype iteration.
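In the spirit of that comparison, the arithmetic can be sketched in a few lines: if the planned design changes can shift the response by more than the model-uncertainty band, one prototype round suffices. All numbers and the exact metric definitions below are invented for illustration.

# Illustrative numbers: a cabin noise response in dB, a residual model
# uncertainty band, and the shift range available from planned tuning.
nominal = 62.0
uncertainty = (-3.0, 4.0)        # residual model-error bounds, dB
adaptability = (-6.0, 1.0)       # achievable response shift, dB
requirement = 65.0               # noise requirement, dB (upper limit)

# Capability metric in the spirit of the paper: range of available
# adjustment relative to the range of model uncertainty.
capability = (adaptability[1] - adaptability[0]) / (uncertainty[1] - uncertainty[0])

# Even the worst-case model error can be pulled under the requirement
# if the most favorable adjustment covers it.
covered = nominal + uncertainty[1] + adaptability[0] <= requirement
print(f"capability = {capability:.2f}, requirement coverable: {covered}")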


Author(s):  
David Riha ◽  
Joseph Hassan ◽  
Marlon Forrest ◽  
Ke Ding

This paper describes the development of a mathematical model capable of providing realistic simulations of vehicle crashes by accounting for uncertainty in the model input parameters. The approach taken was to couple advanced and efficient probabilistic and reliability analysis methods with well-established, high-fidelity finite element and occupant modeling software. Southwest Research Institute has developed probabilistic analysis software called NESSUS. This code was used as the framework for a stochastic crashworthiness FE model. The LS-DYNA finite element model of a vehicle frontal offset impact and the MADYMO model of a 50th-percentile male Hybrid III dummy were integrated with NESSUS to represent the crashworthiness characteristics. The system reliability of the vehicle is computed by defining ten acceptance-criterion performance functions: four occupant injury criteria and six compartment intrusion criteria. The reliability for each acceptance criterion was computed using NESSUS to identify the dominant acceptance criteria of the original design. The femur axial load criterion has the lowest reliability (46%), followed by the HIC criterion (58%) and the door aperture closure criterion (73%). One approach to improvement is to change vehicle parameters to raise the reliability of the dominant criteria. However, a parameter change such as vehicle strength/stiffness may have a beneficial effect on certain acceptance criteria but be detrimental to others. A system reliability analysis was therefore used to include the contribution of all acceptance criteria, correctly quantify the vehicle reliability, and identify important parameters. A redesign analysis was performed using the computed probabilistic sensitivity factors, which were used to identify the most effective changes in model parameters for improving the reliability. A redesign using 11 design modifications increased the original reliability from 23% to 86%. The design changes include increasing the rail material yield strength and reducing its variation, reducing the variation of the bumper and rail installation tolerances, and increasing the rail weld stiffness and reducing its variation. The results show that major reliability improvements for occupant injury and compartment intrusion can be realized by specific modifications to the model input parameters. A traditional (deterministic) method of analysis would not have suggested these modifications.
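The reason the per-criterion numbers (46%, 58%, 73%) cannot simply be multiplied is that the criteria share input parameters and are therefore correlated; a series-system treatment over joint samples is needed, which is what drives the 23% system figure. The toy Monte Carlo sketch below illustrates that point with hypothetical limit states on shared inputs; it is not the NESSUS formulation.

import numpy as np

rng = np.random.default_rng(5)

def system_reliability(criteria, n_mc=200_000):
    # Series system: the design "passes" only if every acceptance
    # criterion passes on the same realization of the shared inputs,
    # which is why per-criterion reliabilities cannot just be multiplied.
    x = rng.normal(size=(n_mc, 4))       # shared stochastic inputs
    passed = np.ones(n_mc, dtype=bool)
    for g in criteria:
        passed &= g(x) > 0.0             # g > 0 means the criterion is met
    return passed.mean()

# Hypothetical injury/intrusion limit states sharing input parameters.
criteria = [
    lambda x: 1.2 - x[:, 0] - 0.3 * x[:, 1],   # femur-load-style margin
    lambda x: 1.0 - 0.8 * x[:, 0] + x[:, 2],   # HIC-style margin
    lambda x: 1.5 - x[:, 1] - 0.5 * x[:, 3],   # door-aperture-style margin
]
print(f"system reliability ~= {system_reliability(criteria):.2%}")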

