The time to failure of fiber bundles subjected to random loads

1979 ◽  
Vol 11 (3) ◽  
pp. 527-541 ◽  
Author(s):  
Howard M. Taylor

The effect on cable reliability of random cyclic loading such as that generated by the wave-induced rocking of ocean vessels deploying these cables is examined. A simple model yielding exact formulas is first explored. In this model, the failure time of a single fiber under a constant load is assumed to be exponentially distributed, and the random loadings are a two-state stationary Markov process. The effect of load on failure time is assumed to follow a power law breakdown rule. In this setting, exact results concerning the distribution of bundle or cable failure time, and especially the mean failure time, are obtained. Where the fluctuations in load are frequent relative to bundle life, such as may occur in long-lived cables, it is shown that randomness in load tends to decrease mean bundle life, but it is suggested that the reduction in mean life often can be restored by modestly reducing the base load on the structure or by modestly increasing the number of elements in the bundle. In later pages this simple model is extended to cover a broader range of materials and random loadings. Asymptotic distributions and mean failure times are given where fibers follow a Weibull distribution of failure time under constant load and the loads are general non-negative stationary processes subject only to a mild condition of asymptotic independence. When the power law breakdown exponent is large, the mean time to bundle failure depends heavily on the exact form of the marginal probability distribution for the random load process and cannot be summarized by the first two moments of this distribution alone.
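The two-state Markov-load model with power-law breakdown lends itself to a short Monte Carlo sketch. In the sketch below, every number (the two load levels, the switching rate, the breakdown exponent) is an illustrative choice, not a value from the paper: a fiber fails when its accumulated load-dependent hazard crosses a unit-exponential threshold.

```python
import random

def fiber_failure_time(loads=(1.0, 1.5), switch_rate=5.0, rho=3.0, rng=random):
    """Simulate one fiber's failure time under a two-state Markov load.

    Hazard under load L is L**rho (power-law breakdown rule); the load
    switches states at exponential times with rate switch_rate.  All
    parameter values are illustrative, not taken from the paper.
    """
    threshold = rng.expovariate(1.0)   # unit-exponential failure threshold
    t, state, acc = 0.0, 0, 0.0        # time, load state (start low), accumulated hazard
    while True:
        dwell = rng.expovariate(switch_rate)     # time until the next load switch
        rate = loads[state] ** rho               # hazard in the current state
        if acc + rate * dwell >= threshold:      # fiber fails during this dwell
            return t + (threshold - acc) / rate
        acc += rate * dwell
        t += dwell
        state = 1 - state                        # flip to the other load level

random.seed(1)
times = [fiber_failure_time() for _ in range(20000)]
mean_t = sum(times) / len(times)
```

Because the load here switches several times per fiber lifetime, the estimated mean sits near the reciprocal of the stationary-average hazard, which is the paper's fast-fluctuation regime.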



1979 ◽  
Vol 11 (1) ◽  
pp. 153-187 ◽  
Author(s):  
S. Leigh Phoenix

A model is developed for the failure time of a bundle of fibers subjected to a constant load. At any time, all surviving fibers share the bundle load equally while all failed fibers support no load. The bundle may collapse immediately or fibers may fail randomly in time, possibly more than one at a time. The failure time of the bundle is the failure time of the last surviving fiber. For a single fiber, the c.d.f. for the failure time is assumed to be a specific functional of an arbitrary load history. The model is developed using a quantile process approach. In the most important case the failure time of the bundle is shown to be asymptotically normal with known parameters. The bundle failure model has the features of both static strength and fatigue failure of earlier analyses, and thus is more realistic than earlier models.
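In the exponential special case, fibers are memoryless under a fixed load, so the equal-load-sharing bundle described above can be simulated by drawing inter-failure times state by state. The sketch below is a minimal illustration under an assumed power-law hazard; the bundle size, total load, and exponent are made-up values, not the paper's.

```python
import random

def bundle_failure_time(n=50, total_load=50.0, rho=3.0, rng=random):
    """Equal-load-sharing bundle: surviving fibers split the load equally
    and each fails at hazard rate (load per fiber)**rho (an illustrative
    power law).  Memorylessness lets us draw each inter-failure time
    directly: with k survivors the total hazard is k * (L/k)**rho."""
    t = 0.0
    for k in range(n, 0, -1):                  # k fibers still alive
        per_fiber = total_load / k
        rate = k * per_fiber ** rho            # total hazard over k fibers
        t += rng.expovariate(rate)             # time until the next fiber fails
    return t

random.seed(2)
sample = [bundle_failure_time() for _ in range(2000)]
mean_t = sum(sample) / len(sample)
```

A histogram of `sample` for growing `n` illustrates the asymptotic normality the paper establishes, though the known limiting parameters are derived there, not here.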




Electronics ◽  
2021 ◽  
Vol 10 (8) ◽  
pp. 876
Author(s):  
Igor Gonçalves ◽  
Laécio Rodrigues ◽  
Francisco Airton Silva ◽  
Tuan Anh Nguyen ◽  
Dugki Min ◽  
...  

Surveillance monitoring systems are essential for preventing many social problems in smart cities, and the internet of things (IoT) now offers a variety of technologies to capture and process massive, heterogeneous data. Because (i) advanced analyses of video streams are performed on powerful recording devices, (ii) surveillance monitoring services require high availability levels, with the service remaining connected, for example, to a network offering higher speed than conventional connections, and (iii) the trustworthy dependability of a surveillance system depends on various factors, it is not easy to identify which components or devices in a system architecture have the greatest impact on the dependability of a specific surveillance system in smart cities. In this paper, we developed stochastic Petri net models for a surveillance monitoring system, varying several parameters to obtain the highest dependability. Two main dependability metrics, reliability and availability, were analyzed comprehensively. The results show that varying the number of long-term evolution (LTE) base stations increases the number of nines (#9s) of availability, and that varying the mean time to failure (MTTF) of the surveillance cameras has a strong impact on system reliability. These findings can assist system architects in planning better-optimized systems in this field based on the proposed models.
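The "number of nines" metric reported above follows from the standard steady-state availability formula for a repairable component. A minimal sketch, with illustrative MTTF/MTTR values rather than the paper's Petri net outputs:

```python
import math

def availability(mttf_hours, mttr_hours):
    """Steady-state availability of a repairable component:
    A = MTTF / (MTTF + MTTR)  (standard formula)."""
    return mttf_hours / (mttf_hours + mttr_hours)

def number_of_nines(a):
    """The 'number of nines' used to report availability
    (e.g. A = 0.999 gives 3 nines)."""
    return -math.log10(1.0 - a)

# Illustrative values only, not outputs of the paper's models.
a = availability(mttf_hours=10000.0, mttr_hours=10.0)
nines = number_of_nines(a)
```

Because the metric is logarithmic, each additional nine requires a tenfold improvement in the MTTF-to-MTTR ratio, which is why small changes in component MTTF can dominate the availability result.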


2021 ◽  
Vol 58 (2) ◽  
pp. 289-313
Author(s):  
Ruhul Ali Khan ◽  
Dhrubasish Bhattacharyya ◽  
Murari Mitra

The performance and effectiveness of an age replacement policy can be assessed by its mean time to failure (MTTF) function. We develop shock model theory in different scenarios for classes of life distributions based on the MTTF function where the probabilities $\bar{P}_k$ of surviving the first k shocks are assumed to have discrete DMTTF, IMTTF and IDMTTF properties. The cumulative damage model of A-Hameed and Proschan [1] is studied in this context and analogous results are established. Weak convergence and moment convergence issues within the IDMTTF class of life distributions are explored. The preservation of the IDMTTF property under some basic reliability operations is also investigated. Finally we show that the intersection of IDMRL and IDMTTF classes contains the BFR family and establish results outlining the positions of various non-monotonic ageing classes in the hierarchy.
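For reference, the MTTF function of an age replacement policy with replacement age x is the standard renewal-reward quantity M(x) = ∫₀ˣ (1 − F(u)) du / F(x). The small numerical sketch below (a left-endpoint Riemann sum and an exponential sanity check, both illustrative rather than from the paper) evaluates it for an arbitrary life c.d.f. F:

```python
import math

def mttf_age_replacement(F, x, steps=100000):
    """MTTF of an age replacement policy with replacement age x:
    M(x) = integral_0^x (1 - F(u)) du / F(x)
    (standard renewal-reward result).  F is the life c.d.f.;
    the integral uses a left-endpoint Riemann sum."""
    h = x / steps
    integral = sum((1 - F(i * h)) * h for i in range(steps))
    return integral / F(x)

# Sanity check: for the exponential law F(u) = 1 - exp(-u), M(x) is
# constant in x -- the memoryless boundary case between the DMTTF
# and IMTTF classes.
F_exp = lambda u: 1 - math.exp(-u)
m1 = mttf_age_replacement(F_exp, 0.5)
m2 = mttf_age_replacement(F_exp, 3.0)
```

Plotting M(x) against x for a given F is what makes the monotone and non-monotone (e.g. IDMTTF) shapes studied in the paper visible.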


Stroke ◽  
2021 ◽  
Vol 52 (Suppl_1) ◽  
Author(s):  
Adam H de Havenon ◽  
Alexandra Kvernland ◽  
Alen Delic ◽  
Ka-ho Wong ◽  
Nazanin Sheibani ◽  
...  

Background: Recurrent stroke has higher morbidity and mortality than incident stroke. We evaluated hemodynamic risk factors for multiple recurrent strokes. Methods: We included patients in the SPS3 trial. The primary predictor was the top tertile, compared to the bottom tertile, of the mean systolic blood pressure (mSBP) and of blood pressure variability, represented as the standard deviation (sdSBP), using blood pressures from day 30 of the trial to the end of follow-up. We excluded blood pressures from the first 30 days to reduce confounding from the trial’s intervention. We fit a logistic regression model to ≥2 recurrent strokes from day 30 to the end of follow-up and, to analyze the multiple failure-time data accurately, applied the Prentice, Williams and Peterson (PWP) extension of the Cox proportional-hazards model to the ordered failure events. Results: We included 2,882 patients, of which 223 had a recurrent stroke and 41/223 had ≥2 recurrent strokes, for a total of 272 strokes. The mean (SD) number of blood pressure readings was 78.0 (37.4). The etiology of the 272 strokes was 161 (59.2%) lacunar, 22 (8.1%) intracranial atherosclerosis, 10 (3.7%) extracranial atherosclerosis, 24 (8.8%) cardioembolic, and 55 (20.2%) cryptogenic or other. In both unadjusted and adjusted logistic regression models and PWP Cox models, the top tertile of sdSBP was consistently predictive of multiple recurrent strokes, while mSBP was not (Tables 1/2). Conclusions: We found that in patients with an index lacunar stroke, higher SBP variability, but not mean SBP, was predictive of multiple recurrent strokes of varying mechanisms.


1937 ◽  
Vol 10 (2) ◽  
pp. 224-230
Author(s):  
Milton L. Braun

Abstract When a suitable weight is supported by suspension from a piece of rubber, as a stationer's band, the rubber may be stretched any amount up to several times its original length, but its new length is not constant; it increases with time. The increase in length with time is variously known as “after-effect,” “creep,” “drift,” “flow,” or “time-yield.” This phenomenon, which seems to have been very incompletely investigated, was probably first recognized by Dietzel in 1857. Kohlrausch in 1875 made an extensive study of both torsional and linear after-effect in metal, glass, and rubber. The loads used by Kohlrausch on rubber were, however, exceedingly small, and the duration of drift was limited to one day. He came to the conclusion that drift in rubber followed a power law for the first sixty minutes. During the next decade Pulfrich slightly extended the work of Kohlrausch by experimentation on a red rubber tube, using elongations up to 150 per cent, with maximum observation time of 15 days. He concluded that the power law of Kohlrausch held for at least thirty minutes of drift. In 1903, Bouasse and Carrière observed the after-effect (in pure gum and sulfur cords of 4 mm. diameter and of specific gravity 0.984) under a great variety of experiments, and concluded that drift was to be expressed by an exponential or logarithmic law rather than by a power function. Both Phillips and Schwartz arrived at similar conclusions. More recently Ariano reported that the drift proceeded at a decreasing rate which finally assumed a constant value either finite or zero. Van Geel and Eymers found that for milled rubber the drift continued until the specimens broke, but that for rubber obtained by evaporation of latex all after-effect ceased within three minutes. Shacklock noted that creep took place for some hours and then reached a limit. Evidently more light needs to be cast on the problem of the drift effect in rubber.
Two preliminary experiments on drift are herein considered; Part I on general trends, and Part II on more specific analysis of the effect as observed in one specimen.


Author(s):  
G. Vijayalakshmi

With the increasing demand for high availability in safety-critical systems such as banking, military, nuclear, and aircraft systems, to mention a few, reliability analysis of distributed software/hardware systems continues to be a focus of research. The reliability of a homogeneous distributed software/hardware system (HDSHS) with a k-out-of-n : G configuration and no load-sharing nodes has been analyzed previously. In practice, however, the system load is shared among the working nodes of a distributed system. In this paper, the dependability analysis of an HDSHS with load-sharing nodes is presented. This distributed system has a load-sharing k-out-of-(n + m) : G configuration. A Markov model for the HDSHS is developed. The failure time distribution of the hardware is represented by the accelerated failure time model. Software faults are detected during software testing and removed upon failure; the Jelinski–Moranda software reliability model is used. Maintenance personnel can repair the system upon both software and hardware failures. Dependability measures such as reliability, availability and mean time to failure are obtained. The effect of load-sharing hosts on the system hazard function and system reliability is presented. Furthermore, an availability comparison between our results and those in the literature is presented.
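A load-sharing k-out-of-n : G system of the kind described can be viewed as a pure death Markov chain whose MTTF is the sum of expected sojourn times in its working states. The sketch below assumes a power-law load acceleration in the spirit of an accelerated failure time model; the acceleration form and every parameter value are illustrative, not the paper's.

```python
def mttf_load_sharing(n=4, k=2, base_rate=0.001, total_load=4.0, alpha=1.5):
    """MTTF of a load-sharing k-out-of-n:G system as a pure death chain.

    With j units up, each carries total_load/j and fails at rate
    base_rate * (total_load/j)**alpha -- an illustrative power-law load
    acceleration, not the paper's model.  The system is down once fewer
    than k units work; by memorylessness the MTTF is the sum of the
    expected sojourn times 1/rate_j over the working states j = n..k.
    """
    mttf = 0.0
    for j in range(n, k - 1, -1):              # j working units, j >= k
        per_unit = total_load / j
        total_rate = j * base_rate * per_unit ** alpha
        mttf += 1.0 / total_rate               # expected sojourn in state j
    return mttf

mttf = mttf_load_sharing()
```

The sum makes the load-sharing effect explicit: as units fail, each survivor carries more load, so the chain moves through its later states faster than an independent-failure model would predict.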


2002 ◽  
Vol 13 (06) ◽  
pp. 777-781
Author(s):  
DANIEL TIGGEMANN

In order to study fluctuations in percolating systems, lattices for sizes up to L = 100 000 have been simulated several thousand times using the Hoshen–Kopelman algorithm. Distributions of cluster numbers are Gaussians for small clusters and half-sided quasi-Gaussians for large clusters. The variance of cluster numbers is proportional to the mean, with power-law deviations for small clusters. Higher moments like skewness and kurtosis were also studied.
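The cluster counting that the Hoshen–Kopelman algorithm performs can be sketched with a union-find labeling of occupied lattice sites; the real algorithm is a more memory-efficient single sweep that recycles labels row by row, which is what makes lattices as large as L = 100 000 feasible. A minimal illustration:

```python
def count_clusters(grid):
    """Label occupied sites of a 2-D lattice with union-find (the idea
    behind Hoshen-Kopelman labeling) and return sorted cluster sizes."""
    rows, cols = len(grid), len(grid[0])
    parent = {}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for r in range(rows):
        for c in range(cols):
            if not grid[r][c]:
                continue
            parent.setdefault((r, c), (r, c))
            if r > 0 and grid[r - 1][c]:    # merge with occupied neighbor above
                union((r, c), (r - 1, c))
            if c > 0 and grid[r][c - 1]:    # merge with occupied neighbor left
                union((r, c), (r, c - 1))

    sizes = {}
    for site in parent:
        root = find(site)
        sizes[root] = sizes.get(root, 0) + 1
    return sorted(sizes.values())

grid = [[1, 1, 0],
        [0, 1, 0],
        [1, 0, 1]]
sizes = count_clusters(grid)   # one 3-site cluster and two isolated sites
```

Running this over many random realizations of a large lattice yields the per-size cluster-number samples whose variance, skewness, and kurtosis the paper studies.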


2002 ◽  
Vol 66 (23) ◽  
Author(s):  
Rogier Verberk ◽  
Antoine M. van Oijen ◽  
Michel Orrit
