An Integrated Performance Measure Approach for System Reliability Analysis

2015 ◽  
Vol 137 (2) ◽  
Author(s):  
Zequn Wang ◽  
Pingfeng Wang

This paper presents a new adaptive sampling approach, based on a novel integrated performance measure approach referred to as "iPMA," for system reliability assessment with multiple dependent failure events. The developed approach employs Gaussian process (GP) regression to construct surrogate models for each component failure event, thereby enabling system reliability estimation directly through Monte Carlo simulation (MCS) on the surrogate models. To adaptively improve the accuracy of the surrogate models for approximating system reliability, an integrated performance measure (iPM), which envelops all component-level failure events, is developed to identify the most useful sample points iteratively. The developed iPM possesses three important properties. First, it represents the exact system-level joint failure events. Second, the iPM is mathematically a smooth function almost everywhere. Third, the weights used to reflect the importance of the multiple component failure modes can be adaptively learned within the iPM. Through this weight-updating process, priority can be adaptively placed on critical failure events as the surrogate models are updated. Based on the developed iPM with these three properties, the maximum confidence enhancement (MCE) based sequential sampling rule can be adopted to identify the most useful sample points and iteratively improve the accuracy of the surrogate models for system reliability approximation. Two case studies demonstrate the effectiveness of system reliability assessment using the developed iPMA methodology.
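The MCS step on surrogate models can be sketched in a few lines. In this illustration, two hypothetical analytical limit states stand in for the trained GP surrogates (a series system is assumed: the system fails if any component fails); none of the functions or numbers come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical component limit states standing in for the trained GP
# surrogates; component i fails when g_i(x) <= 0
def g1(x):
    return 3.0 - x[:, 0] - x[:, 1]

def g2(x):
    return 4.0 + x[:, 0] - 2.0 * x[:, 1]

# MCS on the surrogates: sample the random inputs, evaluate every
# component surrogate, and count system failures (series system:
# the system fails if any component fails)
n = 200_000
x = rng.standard_normal((n, 2))
failed = (g1(x) <= 0) | (g2(x) <= 0)
pf = failed.mean()
```

Because the surrogates are cheap to evaluate, the MCS sample size can be made large enough that sampling error is negligible compared with surrogate error.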

Author(s):  
Zequn Wang ◽  
Pingfeng Wang

This paper presents an integrated performance measure approach (iPMA) for system reliability assessment considering multiple dependent failure modes. An integrated performance function is developed to envelop all component-level failure events, thereby enabling system reliability approximation through a single integrated system limit state. The developed integrated performance function possesses two critical properties. First, it represents the exact joint failure surface defined by the multiple component failure events, so the integrated limit-state function introduces no error into the system reliability computation. Second, smoothness of the integrated performance function on the system failure surface is guaranteed, so advanced response surface techniques can be conveniently employed for response approximation. With the developed integrated performance function, the maximum confidence enhancement based sequential sampling method is adopted as an efficient component reliability analysis tool for system reliability approximation. To further improve computational efficiency, a new constraint filtering technique is developed to adaptively identify active limit states during the iterative sampling process without incurring any extra computational cost. One case study demonstrates the effectiveness of system reliability assessment using the developed iPMA methodology.
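One simple way to envelop several component limit states with a single function is a min-type combination; this sketch (hypothetical limit states and weights, not the paper's specific construction) shows why such an envelope reproduces the joint failure surface exactly while staying smooth almost everywhere:

```python
import numpy as np

# Hypothetical component limit states; component i fails when g_i <= 0
def g1(x):
    return 3.0 - x[0] - x[1]

def g2(x):
    return 4.0 + x[0] - 2.0 * x[1]

def integrated_performance(x, weights=(1.0, 1.0)):
    # Min-type envelope of (weighted) component performances: for a
    # series system it is nonpositive exactly when some component limit
    # state is nonpositive, so it reproduces the joint failure surface,
    # and min() is smooth almost everywhere (nondifferentiable only
    # where two components are simultaneously critical)
    return min(w * g(x) for w, g in zip(weights, (g1, g2)))

# The envelope flags failure iff some component fails
x = np.array([2.0, 2.0])
assert (integrated_performance(x) <= 0) == (g1(x) <= 0 or g2(x) <= 0)
```

With positive weights the zero level set is unchanged, so the weights only reshape the function away from the failure surface, which is what makes them usable for prioritizing modes during sampling.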


2018 ◽  
Vol 140 (10) ◽  
Author(s):  
Zhen Hu ◽  
Zissimos P. Mourelatos

Testing of components at higher-than-nominal stress levels provides an effective way of reducing the required testing effort for system reliability assessment. For various reasons, not all components are directly testable in practice. The missing information on untestable components poses significant challenges to the accurate evaluation of system reliability. This paper proposes a sequential accelerated life testing (SALT) design framework for system reliability assessment of systems with untestable components. In the proposed framework, system-level tests are employed in conjunction with component-level tests to effectively reduce the uncertainty in the system reliability evaluation. To minimize the number of system-level tests, which are much more expensive than component-level tests, the accelerated life testing (ALT) design is performed sequentially. In each design cycle, testing resources are allocated to component-level or system-level tests according to the uncertainty analysis from the system reliability evaluation. The component-level or system-level testing information obtained from the optimized testing plans is then aggregated to obtain the overall system reliability estimate using Bayesian methods. The aggregation of component-level and system-level testing information allows for an effective uncertainty reduction in the system reliability evaluation. Results of two numerical examples demonstrate the effectiveness of the proposed method.
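The Bayesian aggregation idea can be illustrated with a toy beta-binomial model (all test counts here are hypothetical, and this is a generic fusion sketch rather than the authors' specific formulation): component-level pass/fail data produce component reliability posteriors, and a system-level test result reweights the implied system-reliability samples.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical test data: (successes, trials) from component-level ALT
comp_tests = [(48, 50), (45, 50)]

# Component reliability posteriors with Beta(1, 1) priors
post = [rng.beta(1 + s, 1 + n - s, size=20_000) for s, n in comp_tests]

# Series system: reliability is the product of component reliabilities
r_sys = post[0] * post[1]

# Fold in a (hypothetical) system-level test result by weighting each
# posterior sample with the binomial likelihood of the system data
s_sys, n_sys = 9, 10
w = r_sys**s_sys * (1.0 - r_sys)**(n_sys - s_sys)
w /= w.sum()
r_sys_mean = float((w * r_sys).sum())
```

The reweighting step is what lets expensive system-level tests sharpen an estimate that component-level tests alone cannot fully pin down.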


Author(s):  
M. XIE ◽  
T.N. GOH

In this paper, the problem of system-level reliability growth estimation using component-level failure data is studied. It is suggested that system failure data be broken down into component, or subsystem, failure data when such complications arise during the system testing phase. The proposed approach is especially useful when the system does not remain unchanged over time, when some subsystems are improved more than others, or when testing is concentrated on different components at different times. These situations commonly occur in practice, and the approach may still be appropriate even when system-level failure data are available. Two data sets are used to illustrate the simple approach: in one, all subsystems are available for testing at the same time; in the other, the starting times differ among subsystems.
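Once the failure data are broken down per subsystem, each subsystem's growth can be tracked with a standard power-law (Crow-AMSAA) fit; this sketch uses hypothetical failure times and is a generic growth model, not the specific estimator studied in the paper.

```python
import numpy as np

# Hypothetical cumulative failure times (hours) of one subsystem
t = np.array([12.0, 35.0, 61.0, 100.0, 150.0, 215.0, 290.0])
n = np.arange(1, len(t) + 1)

# Power-law (Crow-AMSAA) growth model E[N(t)] = lam * t**beta, fitted
# here by least squares in log space (MLE is the other common choice)
beta, log_lam = np.polyfit(np.log(t), np.log(n), 1)
lam = float(np.exp(log_lam))

# beta < 1 means the failure intensity is decreasing over the test,
# i.e. the subsystem's reliability is growing
growing = beta < 1.0
```

Fitting each subsystem separately is precisely what allows for different improvement rates and different test start times across subsystems.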


2008 ◽  
Vol 5 (1) ◽  
pp. 12-25 ◽  
Author(s):  
Injoong Kim ◽  
Raghuram V. Pucha ◽  
Russell S. Peak ◽  
Suresh K. Sitaraman

Design-for-reliability of complex systems involves top-down reliability allocation approaches, reliability prediction of both random and wear-out failures, and bottom-up reliability assessment approaches to provide more insight into the system-level reliability. Designing complex microelectronic systems while considering reliability in the early design stages is a challenge, because these systems have multilevel structures and logical groups, and numerous components are associated with failure modes and mechanisms. To address these difficulties and to design reliable systems in a systematic way, reliability allocation and reliability assessment algorithms and associated reliability prediction methodologies are presented in this paper in the context of a System-Design-for-Reliability (SDfR) framework. Reliability allocation algorithms are presented for both parallel and series systems that calculate the target reliability of subsystems from the given target reliability of their parent systems. The reliability allocation algorithm is demonstrated for random failures in a video broadcasting system that consists of a four-level packaging structure. The reliability assessment algorithm is demonstrated for wear-out failures in a USB board system that consists of multiple logical groups and various failure modes and mechanisms. The reliability assessment algorithms also demonstrate the use of physics-based reliability prediction of each logical group before assessing the system reliability. The demonstrated results show that the algorithms are useful for determining system configurations and design parameters. Such design changes will reduce the burden of downstream reliability activities.
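The simplest top-down allocation rules for series and parallel parents follow directly from the system reliability formulas; this equal-apportionment sketch is a textbook baseline, not the paper's specific algorithm.

```python
def allocate_series(r_target, n):
    # Series system: R_sys = prod(R_i), so under equal apportionment
    # each of the n subsystems is allocated the n-th root of the
    # parent's target reliability
    return r_target ** (1.0 / n)

def allocate_parallel(r_target, n):
    # Parallel system: 1 - R_sys = prod(1 - R_i), so each subsystem is
    # allocated the n-th root of the allowed system unreliability
    return 1.0 - (1.0 - r_target) ** (1.0 / n)

# Round trips: recombining the allocations recovers the parent target
r_sub = allocate_series(0.99, 4)       # each of 4 series subsystems
r_par = allocate_parallel(0.99, 3)     # each of 3 parallel subsystems
```

Applying such rules recursively down a multilevel packaging structure is what turns one system-level target into per-subsystem targets.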


2019 ◽  
Vol 142 (3) ◽  
Author(s):  
Kassem Moustafa ◽  
Zhen Hu ◽  
Zissimos P. Mourelatos ◽  
Igor Baseski ◽  
Monica Majcher

Accelerated life testing (ALT) has been widely used to accelerate the product reliability assessment process by testing a product at higher-than-nominal stress conditions. For a system with multiple components, the tests can be performed at the component level or the system level. The data at these two levels require different amounts of resources to collect and carry different values of information for system reliability assessment. Even though component-level tests are cheap to perform, they cannot account for the correlations between the failure time distributions of different components. While system-level tests can naturally account for the complicated dependence between component failure time distributions, the required testing efforts are much higher than those of component-level tests. This research proposes a novel resource allocation framework for ALT-based system reliability assessment. A physics-informed load model is first employed to bridge the gap between component-level tests and system-level tests. An optimization framework is then developed to effectively allocate testing resources to different types of tests. The information fusion of component-level and system-level tests allows us to accurately estimate the system reliability with minimal testing resources. Results of two numerical examples demonstrate the effectiveness of the proposed framework.


Author(s):  
Ming Yang ◽  
Zhijian Zhang ◽  
Jie Liu ◽  
Shengyuan Yan

Fault tree analysis (FTA) is a powerful analytical technique for analyzing system reliability and safety by enumerating all possible safety-critical failure modes, which makes it very useful for identifying risks and weaknesses in a system. FTA is therefore widely applied to the safety evaluation of large-scale and mission-critical systems. However, two problems are usually pointed out when building a fault tree for a complex system: 1) system modeling is hard and time-consuming work, and 2) FTA models are difficult to validate. In this paper, we propose a new method for system reliability analysis based on Multilevel Flow Models (MFM) and the Goal Tree-Success Tree (GTST) method. We use the Goal Tree (GT) methodology to model the target system at the higher, system level, and the Success Tree (ST) together with MFM at the lower, functional level. In this way, the modeling effort can be significantly reduced. We also present an algorithm to translate the GTST-MFM model into an ST model, on which qualitative reliability analysis can be performed with the Fussell-Vesely algorithm. A low head safety injection system (LHSIS) is taken as a case study to exemplify how to apply the proposed GTST-MFM method to model a system and to validate fault trees built directly by the deductive method.
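Once minimal cut sets are in hand, the standard quantitative steps are the top-event probability and Fussell-Vesely importance; this small sketch (a hypothetical two-cut-set tree with independent basic events, not the LHSIS model) shows both.

```python
from itertools import combinations

# Minimal cut sets of a small hypothetical fault tree over basic
# events a, b, c, with independent basic-event probabilities
cut_sets = [{"a", "b"}, {"c"}]
p = {"a": 0.1, "b": 0.2, "c": 0.05}

def cut_prob(events):
    out = 1.0
    for e in events:
        out *= p[e]
    return out

def top_event_prob(cut_sets):
    # Exact top-event probability by inclusion-exclusion over the
    # minimal cut sets (tractable only for small trees)
    total = 0.0
    for k in range(1, len(cut_sets) + 1):
        for combo in combinations(cut_sets, k):
            total += (-1) ** (k + 1) * cut_prob(set().union(*combo))
    return total

def fussell_vesely(event, cut_sets):
    # Fussell-Vesely importance: share of the top-event probability
    # contributed by cut sets containing the event (rare-event form)
    num = sum(cut_prob(cs) for cs in cut_sets if event in cs)
    return num / top_event_prob(cut_sets)
```

For the example tree, the single-event cut set {c} dominates, which is exactly the kind of weakness FTA is meant to expose.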


2013 ◽  
Vol 694-697 ◽  
pp. 868-871
Author(s):  
Jun Zhang ◽  
Bing Zhang

To reduce the influence of uncertainties on the performance of complicated engineering systems, a new method based on the performance measure approach and collaborative optimization (PMA-CO) is proposed to implement reliability-based multidisciplinary design optimization of gear transmissions. Both the mathematical model and the procedure of PMA-CO are presented. With the adoption of slack factors at the system level of collaborative optimization, both CO and PMA-CO are applied to the optimization of a gear transmission. The proposed PMA-CO improves the reliability of the gear transmission and achieves a tradeoff between design cost and reliability. The PMA-CO is therefore effective and practical in engineering design.
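The core of the performance measure approach is to check the target-reliability quantile of the performance function instead of the failure probability itself. This sketch estimates that quantile by plain MCS on a hypothetical limit state; PMA implementations normally use an inverse MPP search rather than sampling.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(2)

def performance_measure(g, sample_inputs, beta_target, n=200_000):
    # PMA replaces the probabilistic constraint
    # P[g(X) <= 0] <= Phi(-beta) with a check on the Phi(-beta)-quantile
    # of g(X): the design is feasible when that quantile is nonnegative.
    gx = g(sample_inputs(n))
    p_target = NormalDist().cdf(-beta_target)
    return float(np.quantile(gx, p_target))

# Hypothetical limit state and input sampler
g = lambda x: 3.0 - x[:, 0] - x[:, 1]
inputs = lambda m: rng.standard_normal((m, 2))

g_p = performance_measure(g, inputs, beta_target=2.0)
feasible = g_p >= 0.0
```

Evaluating a quantile instead of a small failure probability is what makes PMA-type constraints numerically stable inside an optimization loop such as CO.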


Author(s):  
Jingyi Liu ◽  
Yugang Zhang ◽  
Bifeng Song

This paper establishes a competing failure analysis model for complex mechanical systems under component failure and performance failure, considering degradation. A mechanical system is composed of a number of components and must also accomplish its specified performance. The system may therefore fail through two kinds of failure modes: component failure due to degradation (such as component wear) and performance failure (the system cannot achieve its required performance). The two failure modes compete with each other, because the system fails as soon as either mode occurs. Both the components and the system performance degrade over time as the system operates. In this paper, Brownian motion (BM) with nonlinear drift is used to model the degradation of components, from which component failure is analyzed. A performance measurement function is built with a surrogate model and used to analyze performance failure at different working cycles. The Farlie-Gumbel-Morgenstern (FGM) copula is introduced to describe the dependence between the two failure modes. The system reliability is analyzed through the FGM copula, as is the competing failure probability for each failure mode. Finally, a numerical example and an engineering case study illustrate the proposed model.
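The FGM copula itself is a one-line formula, and combining two competing modes through it is equally short; the probabilities below are hypothetical and the sketch only illustrates the general mechanism, not the paper's full degradation model.

```python
def fgm_copula(u, v, theta):
    # Farlie-Gumbel-Morgenstern copula; valid for |theta| <= 1, which
    # restricts it to fairly weak dependence between the margins
    return u * v * (1.0 + theta * (1.0 - u) * (1.0 - v))

def competing_failure_prob(p1, p2, theta):
    # The system fails when either mode occurs, so
    # P(fail) = p1 + p2 - P(both), with the joint term given by the
    # copula evaluated at the marginal failure probabilities
    return p1 + p2 - fgm_copula(p1, p2, theta)
```

With theta = 0 the copula reduces to independence; positive theta raises the joint-failure term and so lowers the combined failure probability relative to the independent case.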


Author(s):  
Ali Kaveh ◽  
Kiarash Biabani Hamedani ◽  
Mohammad Kamalinejad

In this paper, recently developed set-theoretical variants of the teaching-learning-based optimization (TLBO) algorithm and the shuffled shepherd optimization algorithm (SSOA) are employed for system reliability-based design optimization (SRBDO) of truss structures. The set-theoretical variants are designed on a simple framework in which the population of candidate solutions is divided into a number of smaller, well-arranged sub-populations. In addition, the framework is applied to the Jaya algorithm, leading to a set-theoretical variant of the Jaya algorithm. So far, most reliability-based design optimization studies have focused on the reliability of single structural members, because optimization problems with system reliability-based constraints are computationally expensive to solve. This is especially true for statically redundant structures, where the number of failure modes is so high that it is impractical to identify all of them. System-level reliability analysis of the truss structures is carried out with the branch and bound method, by which the stochastically dominant failure paths are identified within a reasonable time. Finally, three numerical examples, including size optimization of truss structures, are presented to illustrate the effectiveness of the proposed SRBDO approach. The results indicate the efficiency and applicability of the set-theoretical optimization algorithms for solving SRBDO problems of truss structures.
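When only the dominant failure modes found by a branch-and-bound search are available, the system failure probability is commonly bracketed with simple first-order series-system bounds; the mode probabilities below are hypothetical.

```python
def series_system_bounds(mode_probs):
    # First-order (uni-modal) bounds on the failure probability of a
    # series system from the probabilities of the dominant failure
    # modes: max p_i <= P_f <= min(1, sum p_i)
    return max(mode_probs), min(1.0, sum(mode_probs))

lo, hi = series_system_bounds([0.010, 0.004, 0.002])
```

Tighter bi-modal (Ditlevsen) bounds also use pairwise joint mode probabilities, at the cost of extra reliability analyses per pair.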

