System Reliability at the Crossroads

2012 ◽  
Vol 2012 ◽  
pp. 1-36 ◽  
Author(s):  
Vitali Volovoi

This paper surveys the current state of research related to the modeling and prediction of failures of engineering systems. It is argued that while greater understanding of the physics of failure has led to significant progress at the component level, there are significant challenges remaining at the system level. System reliability, a field of applied mathematics that addresses the latter challenges, is at a juncture where fundamental changes are likely. On the one hand, the traditional part of the field entered a phase of diminishing returns, largely having followed the trajectory of the Cold-War era technology development: golden years of rapid growth in the 1950s and 1960s, followed by maturation and slowing down in the ensuing decades. On the other hand, the convergence of several technologies related to data collection and processing, combined with important changes in engineering business and government priorities, has created the potential for a perfect storm that can revive and fundamentally transform the field; however, for this transformation to occur, some serious obstacles need to be overcome. The paper examines these obstacles along with several key areas of research that can provide enabling tools for this transformation.

2018 ◽  
Vol 140 (10) ◽  
Author(s):  
Zhen Hu ◽  
Zissimos P. Mourelatos

Testing of components at higher-than-nominal stress level provides an effective way of reducing the required testing effort for system reliability assessment. Due to various reasons, not all components are directly testable in practice. The missing information of untestable components poses significant challenges to the accurate evaluation of system reliability. This paper proposes a sequential accelerated life testing (SALT) design framework for system reliability assessment of systems with untestable components. In the proposed framework, system-level tests are employed in conjunction with component-level tests to effectively reduce the uncertainty in the system reliability evaluation. To minimize the number of system-level tests, which are much more expensive than the component-level tests, the accelerated life testing (ALT) design is performed sequentially. In each design cycle, testing resources are allocated to component-level or system-level tests according to the uncertainty analysis from system reliability evaluation. The component-level or system-level testing information obtained from the optimized testing plans is then aggregated to obtain the overall system reliability estimate using Bayesian methods. The aggregation of component-level and system-level testing information allows for an effective uncertainty reduction in the system reliability evaluation. Results of two numerical examples demonstrate the effectiveness of the proposed method.
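The Bayesian aggregation step described above can be sketched with a toy series system: component pass/fail test outcomes update Beta posteriors, and Monte Carlo sampling propagates them to a posterior on system reliability, whose spread shrinks as tests are allocated to the most uncertain component. The test counts below are hypothetical, and the pass/fail Beta-binomial model is a deliberate simplification of the paper's ALT setting.

```python
import numpy as np

rng = np.random.default_rng(0)

def system_reliability_posterior(tests, n_samples=100_000):
    """Posterior samples of series-system reliability.

    tests: list of (successes, failures) per component. Each
    component reliability gets a Beta(1+s, 1+f) posterior
    (uniform prior); a series system multiplies the samples.
    """
    samples = np.ones(n_samples)
    for s, f in tests:
        samples *= rng.beta(1 + s, 1 + f, n_samples)
    return samples

# Hypothetical counts: one well-tested and one sparsely tested component.
post = system_reliability_posterior([(45, 5), (8, 2)])

# Allocate 40 extra tests to the sparsely tested component
# (assuming it keeps its ~0.8 observed success rate).
post_more = system_reliability_posterior([(45, 5), (40, 10)])

print(post.std(), post_more.std())  # uncertainty shrinks with more tests
```

The same mechanism underlies the sequential design cycles in the paper: the component whose posterior dominates the system-level uncertainty is the natural target for the next batch of tests.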


Author(s):  
M. XIE ◽  
T.N. GOH

In this paper, the problem of estimating system-level reliability growth from component-level failure data is studied. It is suggested that system failure data be broken down into component, or subsystem, failure data when such complications arise during the system testing phase. The proposed approach is especially useful when the system does not remain unchanged over time, when some subsystems are improved more than others, or when testing has been concentrated on different components at different times. These situations frequently occur in practice, and the approach may be useful even when only system failure data are provided. Two data sets illustrate this simple approach: in one, all subsystems are available for testing at the same time; in the other, the starting times differ across subsystems.
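The abstract does not name a specific growth model, but subsystem-level growth estimation is commonly done with the power-law NHPP (Crow-AMSAA) model, where a shape parameter below one indicates reliability growth. The sketch below fits that model independently to two subsystems with made-up failure logs; all numbers are illustrative.

```python
import math

def crow_amsaa_mle(failure_times, T):
    """MLE of the power-law NHPP (Crow-AMSAA) parameters for
    time-truncated test data on [0, T], intensity lam*beta*t**(beta-1).
    Returns (lam_hat, beta_hat); beta < 1 indicates reliability
    growth (decreasing failure intensity)."""
    n = len(failure_times)
    beta = n / sum(math.log(T / t) for t in failure_times)
    lam = n / T ** beta
    return lam, beta

# Hypothetical subsystem failure logs, each tested for 1000 hours.
subsystem_a = [12, 40, 95, 180, 350, 700]     # failures thin out: growth
subsystem_b = [100, 250, 400, 550, 700, 850]  # roughly constant rate

for name, times in [("A", subsystem_a), ("B", subsystem_b)]:
    lam, beta = crow_amsaa_mle(times, T=1000.0)
    print(name, round(beta, 3))
```

Fitting per subsystem, as suggested above, exposes the uneven improvement that a single system-level fit would average away.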


1996 ◽  
Vol 33 (02) ◽  
pp. 548-556 ◽  
Author(s):  
Fan C. Meng

More applications of the principle for interchanging components due to Boland et al. (1989) in reliability theory are presented. In the context of active redundancy improvement we show that if two nodes are permutation equivalent then allocating a redundancy component to the weaker position always results in a larger increase in system reliability, which generalizes a previous result due to Boland et al. (1992). In the case of standby redundancy enhancement, we prove that a series (parallel) system is the only system for which standby redundancy at the component level is always more (less) effective than at the system level. Finally, the principle for interchanging components is extended from binary systems to the more complicated multistate systems.


Author(s):  
JOSE E. RAMIREZ-MARQUEZ ◽  
DAVID W. COIT ◽  
TONGDAN JIN

A new methodology is presented to allocate testing units to the different components within a system when the system configuration is fixed and there are budgetary constraints limiting the amount of testing. The objective is to allocate additional testing units so that the variance of the system reliability estimate, at the conclusion of testing, will be minimized. Testing at the component-level decreases the variance of the component reliability estimate, which then decreases the system reliability estimate variance. The difficulty is to decide which components to test given the system-level implications of component reliability estimation. The results are enlightening because the components that most directly affect the system reliability estimation variance are often not those components with the highest initial uncertainty. The approach presented here can be applied to any system structure that can be decomposed into a series-parallel or parallel-series system with independent component reliability estimates. It is demonstrated using a series-parallel system as an example. The planned testing is to be allocated and conducted iteratively in distinct sequential testing runs so that the component and system reliability estimates improve as the overall testing progresses. For each run, a nonlinear programming problem must be solved based on the results of all previous runs. The testing allocation process is demonstrated on two examples.
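The key observation above — that the component driving the system-level variance is often not the one with the highest individual uncertainty — can be illustrated with a first-order (delta-method) variance propagation on a small series-parallel system. The structure, reliability estimates, and sample sizes below are hypothetical, not taken from the paper.

```python
import numpy as np

def series_parallel_reliability(p):
    """R_sys for two parallel pairs in series: (1,2) then (3,4)."""
    r12 = 1 - (1 - p[0]) * (1 - p[1])
    r34 = 1 - (1 - p[2]) * (1 - p[3])
    return r12 * r34

def variance_contributions(p, n):
    """Delta-method contribution of each component's estimation
    variance p(1-p)/n to Var(R_sys_hat), via numerical gradients."""
    eps = 1e-6
    base = series_parallel_reliability(p)
    contrib = []
    for i in range(4):
        q = p.copy()
        q[i] += eps
        grad = (series_parallel_reliability(q) - base) / eps
        contrib.append(grad**2 * p[i] * (1 - p[i]) / n[i])
    return np.array(contrib)

# Hypothetical estimates: component 1 is weak but shielded by a
# strong partner; component 3 is good but its partner is mediocre.
p = np.array([0.70, 0.99, 0.90, 0.75])
n = np.array([20, 20, 20, 20])
c = variance_contributions(p, n)
print(c / c.sum())
```

Here component 1 has the largest individual estimation variance, yet component 3 dominates the system variance because its parallel partner offers little shielding — exactly the kind of system-level effect the allocation method exploits.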


2006 ◽  
Vol 27 (3) ◽  
pp. 1031-1044
Author(s):  
S D Snyman

The identity of the three figures mentioned in Malachi 3:1 remains an intriguing question for scholars. This article gives an overview of the current state of research on the problem, highlighting the strengths and weaknesses of the different solutions, and makes yet another proposal, adding some new arguments to existing answers. The history of research on the problem can be categorised into three groups: the three figures refer to three different personalities, they all refer to the same person, or they refer to two different persons. The conclusion reached is that the three figures are references to two persons, the one human and the other divine. The messenger is identified as the prophet Malachi.


2019 ◽  
Vol 142 (3) ◽  
Author(s):  
Kassem Moustafa ◽  
Zhen Hu ◽  
Zissimos P. Mourelatos ◽  
Igor Baseski ◽  
Monica Majcher

Accelerated life testing (ALT) has been widely used to accelerate the product reliability assessment process by testing a product at higher-than-nominal stress conditions. For a system with multiple components, the tests can be performed at the component level or the system level. The data at these two levels require different amounts of resources to collect and carry different values of information for system reliability assessment. Even though component-level tests are cheap to perform, they cannot account for the correlations between the failure time distributions of different components. While system-level tests naturally account for the complicated dependence between component failure time distributions, the required testing effort is much higher than that of component-level tests. This research proposes a novel resource allocation framework for ALT-based system reliability assessment. A physics-informed load model is first employed to bridge the gap between component-level and system-level tests. An optimization framework is then developed to effectively allocate testing resources to the different types of tests. The fusion of component-level and system-level test information allows us to accurately estimate the system reliability with a minimal requirement on testing resources. Results of two numerical examples demonstrate the effectiveness of the proposed framework.
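The point about dependence can be illustrated with a shared-load (frailty) simulation — a crude stand-in for the paper's physics-informed load model, with made-up parameters. A common random load drives both components' failure rates, so their lifetimes are positively correlated; multiplying component-only survival probabilities (what isolated component tests would give) then understates the series-system reliability.

```python
import numpy as np

rng = np.random.default_rng(7)
N, t = 100_000, 1.0

# Shared random load L scales both components' failure rates,
# inducing correlation between their failure times.
L = rng.lognormal(mean=0.0, sigma=0.5, size=N)
t1 = rng.exponential(1.0 / L)  # component 1 lifetime
t2 = rng.exponential(1.0 / L)  # component 2 lifetime

# Series system: true reliability under load-induced dependence.
r_true = (np.minimum(t1, t2) > t).mean()

# Independence assumption from component-only test data.
r_indep = (t1 > t).mean() * (t2 > t).mean()
print(r_true, r_indep)  # independence underestimates the true value here
```

The gap between the two numbers is precisely the information that system-level tests (or a calibrated load model) contribute and that component-level tests alone cannot recover.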


2015 ◽  
Vol 12 (1) ◽  
pp. 161-184 ◽  
Author(s):  
Zapata Cruz ◽  
José Fernández-Alemán ◽  
Ambrosio Toval

A number of cloud applications are currently in wide use. However, one of the main reasons for the slowdown in the growth of cloud computing is security. Even though some research has been done in the security field, it is necessary to assess the current state of research and practice. This paper studies the existing research on security in cloud computing in order to analyze the state of the art and identify future directions. The method selected to investigate security in cloud computing is a systematic mapping study. A total of 344 papers were selected and classified by security goal, research type and contribution type. The main security-specific issues extracted are data protection (30.29%), access management (20.14%), software isolation (16.7%), availability (16%), trust (13.6%) and governance (3.27%). Our results suggest that cloud computing is a promising area for security research and evaluation.


Author(s):  
Torsten Hauck ◽  
Vibhash Jha ◽  
J. H. J. Janssen

The development of complex electronic modules requires very efficient simulation techniques for a faster design and optimization process. For smaller component-level models, current state-of-the-art FEM/CFD tools are sufficient if appropriate boundary conditions are used. For larger system-level models, however, this approach can be computationally expensive, as the finite element model can lead to a very large set of equations. Hence, there is a need for more efficient computational methods such as model order reduction (MOR). MOR exploits properties of dynamical systems to reduce their complexity while preserving the input/output behavior of the system.
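A minimal projection-based MOR sketch (using proper orthogonal decomposition rather than any specific tool named by the authors): simulate a full-order linear heat-conduction model, build a reduced basis from the dominant singular vectors of the solution snapshots, and Galerkin-project the dynamics onto it. The 200-state model and its parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 200, 10  # full and reduced state dimensions

# Full model: discretized 1-D heat equation, x' = A x + b u (stable).
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) * n**2 * 1e-4
b = np.zeros(n)
b[0] = 1.0  # heat injected at one end

def simulate(A, b, steps=2000, dt=1e-3):
    """Forward-Euler response to a unit step input."""
    x = np.zeros(A.shape[0])
    traj = []
    for _ in range(steps):
        x = x + dt * (A @ x + b)
        traj.append(x.copy())
    return np.array(traj)

snapshots = simulate(A, b)

# POD: Galerkin projection onto the r dominant left singular vectors.
U, _, _ = np.linalg.svd(snapshots.T, full_matrices=False)
V = U[:, :r]
Ar, br = V.T @ A @ V, V.T @ b

# Reduced trajectory, lifted back to full space for comparison.
z = simulate(Ar, br)
err = np.abs(z @ V.T - snapshots).max()
print(err)  # the 10-state model tracks the 200-state response closely
```

The reduced system has 10 equations instead of 200, which is the kind of saving MOR is meant to deliver for large system-level thermal models.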


2015 ◽  
Vol 9 (1) ◽  
pp. 43-67
Author(s):  
Yuval Feldman ◽  
Tami Kricheli-Katz

The paper highlights how our knowledge of the way the human mind works and the way people behave in social interactions may contribute to our understanding of employment discrimination and provide effective ways to address it. It calls for a rigorous empirical study of the mechanisms generating different forms of discrimination against disadvantaged groups and of the implications that follow for law and policy. The paper’s focus is theoretical, criticizing the current state of research on employment discrimination and calling for an integrative approach to research in this area. In particular, the paper criticizes the lack of mutual communication among the various disciplines that study discrimination: over-reliance on one type of methodology limits scholars’ ability to address the nuances of most discriminatory settings. We also criticize the “one policy fits all” approach, in which discrimination against any one type of disadvantaged group is treated as capturing all types of discrimination, as well as the failure to fully account for the interplay between deliberative and automatic modes of reasoning. The paper suggests that adopting an integrative perspective would raise awareness among policy-makers and employers of variations in the effects of social categories on hiring, promotion, and firing practices.

