A WEB-BASED SOFTWARE FOR THE CALCULATION OF THEORETICAL PROBABILITY DISTRIBUTIONS

Author(s):  
Fatma Hilal YAĞIN ◽  
Emek GÜLDOĞAN ◽  
Cemil ÇOLAK
2019 ◽  
Vol 26 (2) ◽  
pp. 290-310 ◽  
Author(s):  
Balaraju Jakkula ◽  
Govinda Raj M. ◽  
Murthy Ch.S.N.

Purpose The load haul dumper (LHD) is one of the main ore-transporting machines used in the underground mining industry, and its reliability is critical to achieving expected production targets. Equipment performance must be maintained at its highest level, which can be accomplished only by reducing sudden breakdowns of components/subsystems in a complex system. Defective components/subsystems can be identified through downtime analysis; hence, it is very important to develop proper maintenance strategies for the replacement or repair of defective items. Suitable maintenance management actions improve equipment performance. This paper aims to discuss this issue. Design/methodology/approach Reliability analysis (a renewal approach) was used to analyze the performance of the LHD machines. The best-fit distribution for each data set was selected using the Kolmogorov–Smirnov (K–S) test, and the parameters of the theoretical probability distributions were estimated by the maximum likelihood estimation (MLE) method. Findings The independent and identically distributed (IID) assumption was validated through trend and serial correlation tests; on the basis of these results, the data sets conform to the IID assumption, so the renewal process approach was used for further investigation. The reliability of each individual subsystem was computed according to its best-fit distribution, and reliability-based preventive maintenance (PM) time schedules were calculated for an expected 90 percent reliability level.
Research limitations/implications Reliability analysis is a complex technique and requires strategic decision-making knowledge when selecting the methodology to be used. As the present case study came from a public-sector company operating under financial constraints, the conclusions/findings may not be universally applicable. Originality/value The present study throws light on equipment that needs a tailored maintenance schedule, partly due to the peculiar mining conditions under which it operates. The study focuses on estimating the performance of four well-mechanized LHD systems through reliability, availability and maintainability (RAM) modeling. Based on the results, reasons for the performance drop of each machine were identified, and suitable recommendations were made for enhancing the performance of this capital-intensive production equipment. As maintenance management is the principal means of improving machinery performance, PM time intervals were estimated with respect to the expected reliability level.
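The workflow this abstract describes (MLE fitting of candidate distributions, K–S selection of the best fit, then reading off the PM interval at which reliability falls to 90 percent) can be sketched as follows. The candidate families, the synthetic time-between-failure data, and the fixed zero location are illustrative assumptions, not the paper's data or choices.

```python
# Sketch: fit candidate distributions to time-between-failure (TBF) data
# by MLE, pick the best fit via the Kolmogorov-Smirnov statistic, then
# compute the PM interval t* solving R(t*) = 1 - F(t*) = 0.90.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
tbf = rng.weibull(1.5, 200) * 80.0   # synthetic TBF sample, hours (illustrative)

candidates = {
    "weibull": stats.weibull_min,
    "lognorm": stats.lognorm,
    "expon": stats.expon,
}

best_name, best_dist, best_params, best_ks = None, None, None, np.inf
for name, dist in candidates.items():
    params = dist.fit(tbf, floc=0)                      # MLE, location fixed at 0
    ks = stats.kstest(tbf, dist.cdf, args=params).statistic
    if ks < best_ks:
        best_name, best_dist, best_params, best_ks = name, dist, params, ks

# Reliability R(t) = 1 - F(t), so the 90%-reliability PM time is F^{-1}(0.10).
pm_interval = best_dist.ppf(0.10, *best_params)
print(f"best fit: {best_name}, PM interval for R=0.90: {pm_interval:.1f} h")
```

In practice each subsystem's TBF data would be fitted separately, and the trend/serial-correlation tests would be run first to justify the renewal assumption.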


2004 ◽  
Vol 4 (3) ◽  
pp. 218-225 ◽  
Author(s):  
David P. Dupplaw ◽  
David Brunson ◽  
Anna-Jane E. Vine ◽  
Colin P. Please ◽  
Susan M. Lewis ◽  
...  

When planning experiments to examine how product performance depends on the design, manufacture and environment of use, there are invariably too few resources to enable a complete investigation of all possible variables (factors). We have developed new algorithms for generating and assessing efficient two-stage group screening strategies which are implemented through a web-based system called GISEL. This system elicits company knowledge which is used to guide the formulation of competing two-stage strategies and, via the algorithms, to provide quantitative assessment of their efficiencies. The two-stage group screening method investigates the effect of a large number of factors by grouping them in a first stage experiment whose results identify factors to be further investigated in a second stage. Central to the success of the procedure is ensuring that the factors considered, and their grouping, are based on the best available knowledge of the product. The web-based software system allows information and ideas to be contributed by engineers at different sites and allows the experiment organizer to use these expert opinions to guide decisions on the planning of group screening experiments. The new group screening algorithms implemented within the software give probability distributions and indications of the total resource needed for the experiment. In addition, the algorithms simulate results from the experiment and estimate the percentage of important or active main effects and interactions that fail to be detected. The approach is illustrated through the planning of an experiment on engine cold start optimization at Jaguar Cars.
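The abstract's final algorithmic step, simulating the experiment to estimate the fraction of active effects that escape detection, can be illustrated with a toy two-stage group-screening simulation. The group size, activity probability, and cancellation model below are illustrative assumptions, not GISEL's elicited values or algorithms.

```python
# Toy simulation of two-stage group screening: factors are grouped for
# stage 1; factors in groups whose combined effect is zero (e.g. two
# active factors with opposite signs cancelling) are never carried to
# stage 2, so their effects go undetected.
import numpy as np

rng = np.random.default_rng(0)
n_factors, group_size, p_active = 60, 5, 0.1   # illustrative settings
n_sims = 2000

missed = 0
total_active = 0
for _ in range(n_sims):
    active = rng.random(n_factors) < p_active
    total_active += int(active.sum())
    signs = rng.choice([-1.0, 1.0], n_factors)  # effect directions
    for g in range(0, n_factors, group_size):
        grp = slice(g, g + group_size)
        group_effect = (signs[grp] * active[grp]).sum()
        if group_effect == 0:                    # group looks inactive
            missed += int(active[grp].sum())     # its active factors are lost

print(f"estimated fraction of active effects missed: {missed / total_active:.3f}")
```

A real assessment would also account for interactions, resource cost per run, and the elicited prior probabilities of activity, as the GISEL algorithms do.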


2014 ◽  
Vol 52 ◽  
pp. 1-4 ◽  
Author(s):  
David E. Morris ◽  
Jeremy E. Oakley ◽  
John A. Crowe

Author(s):  
Anna Walaszek-Babiszewska

In this chapter, advanced fuzzy modeling methods that join fuzzy and probabilistic approaches are presented. Zadeh's notion of the probability of imprecise events serves as the basis for determining the probability distributions of a linguistic random variable, a stochastic process with fuzzy states, and a fuzzy-valued stochastic Markov process. Rule-based fuzzy representations are also presented. Example calculations illustrate how fuzzy models are built using empirical data or a theoretical probability distribution.
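The foundation the chapter builds on, Zadeh's probability of a fuzzy event, is P(A) = Σₓ μ_A(x)·p(x) for a discrete random variable. A minimal worked example (the linguistic event "X is high" and all its membership grades and probabilities are illustrative):

```python
# Zadeh's probability of an imprecise (fuzzy) event:
# P(A) = sum over x of membership(x) * probability(x).
p = {10: 0.2, 20: 0.5, 30: 0.3}          # probability distribution of X
mu_high = {10: 0.0, 20: 0.5, 30: 1.0}    # membership grades of "X is high"

prob_high = sum(mu_high[x] * px for x, px in p.items())
print(prob_high)  # 0.0*0.2 + 0.5*0.5 + 1.0*0.3 = 0.55
```

For a crisp event (memberships restricted to 0 or 1) this reduces to ordinary probability, which is what makes the notion a natural bridge between the two approaches.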


2021 ◽  
Author(s):  
Shobhit Singh ◽  
Somil Swarnkar ◽  
Rajiv Sinha

Floods are among the worst natural hazards around the globe, and roughly 40% of all losses worldwide due to natural hazards since the 1980s have been caused by floods. In India, more than 40 million hectares are affected by floods annually, making it one of the worst-affected countries in the world. In particular, the Ganga river basin in northern India, home to nearly half a billion people, is one of the worst flood-affected regions in the country. The Ghaghra river, which originates in the High Himalaya, is one of the highest discharge-carrying tributaries of the Ganga. Despite being severely affected by floods each year, the flood frequencies of the Ghaghra river are poorly understood, making it one of the least studied basins in the Ganga system. It is important to note that, like several other rivers in India, the Ghaghra has several hydrological stations where only stage data are available, so traditional flood frequency analysis using discharge data becomes difficult. In this work, we performed flood frequency analysis using both stage and discharge data sets at three gauge stations in the Ghaghra river basin and compared the results using statistical methods. L-moment analysis was applied to assess the probability distribution for the flood frequency analysis. Further, we used the TanDEM-X 90 m digital elevation model (DEM) to map the flood inundation regions. Our results suggest that the Weibull distribution is statistically significant for the discharge data set, whereas stage above danger level (SADL) follows the Generalized Pareto (GP3) and Generalized Extreme Value (GEV) distributions. Quantile-quantile plot analysis suggests that the SADL probability distributions (GP3 and GEV) closely follow the theoretical probability distributions, while the discharge distribution (Weibull) shows a relatively weak correlation with the theoretical probability distribution.
We further used the probability distributions to assess SADL frequencies at the 5-, 10-, 20-, 50- and 100-year return periods. The magnitudes of SADL at different return periods were then used to map the inundated areas around the gauging stations, and these inundation maps were cross-validated against the globally available flood-extent maps provided by the Dartmouth Flood Observatory. Overall, this work presents a simple and novel technique for generating inundation maps around gauging locations without using sophisticated hydraulic models.
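The return-period step described above, fitting an extreme-value distribution to annual maxima and reading off the T-year quantile F⁻¹(1 − 1/T), can be sketched as follows. The abstract fits by L-moments; scipy's MLE fit is used here as a stand-in, and the annual-maximum series is synthetic, not the Ghaghra data.

```python
# Sketch: fit a GEV distribution to annual maxima, then compute the
# 5/10/20/50/100-year levels as the quantiles F^{-1}(1 - 1/T).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Synthetic 60-year annual-maximum series (illustrative parameters).
annual_max = stats.genextreme.rvs(c=-0.1, loc=8.0, scale=1.5,
                                  size=60, random_state=rng)

c, loc, scale = stats.genextreme.fit(annual_max)  # MLE stand-in for L-moments
for T in (5, 10, 20, 50, 100):
    level = stats.genextreme.ppf(1 - 1 / T, c, loc, scale)
    print(f"{T:>3}-yr level: {level:.2f}")
```

The same quantiles, computed from the fitted SADL distribution at each gauge, are what drive the inundation mapping on the DEM.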


2020 ◽  
Vol 2 (1) ◽  
pp. 22
Author(s):  
Matthaios Saridakis ◽  
Mike Spiliotis ◽  
Panagiotis Angelidis ◽  
Basil Papadopoulos

In this article, an adjustment of extreme-value theoretical probability distributions to the sample is proposed, based on the conventional fuzzy linear regression model of Tanaka [1], in which all the data must be included within the produced fuzzy band. This is achieved by using the quantile approach, which relates the observed return period to the theoretical cumulative probability. A new contribution of this work is the use of the fuzzified maximum likelihood as a measure of goodness of fit. The model is applied to real annual-maximum-flow data from the Strymonas River, and useful conclusions are drawn.
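The quantile step that links the two sides of the model pairs each ordered annual maximum with an empirical non-exceedance probability, so the observed return period can be set against the theoretical cumulative probability. A minimal sketch using the Weibull plotting position Fᵢ = i/(n+1) (the flow values are illustrative, not the Strymonas data, and the full fuzzy-regression fit is omitted):

```python
# Pair ordered annual maxima with empirical non-exceedance probabilities
# F_i = i/(n+1), giving observed return periods T_i = 1/(1 - F_i) that a
# theoretical CDF can then be adjusted against.
import numpy as np

flows = np.array([310.0, 450.0, 275.0, 520.0, 390.0, 610.0, 340.0, 480.0])
x = np.sort(flows)                # order statistics, ascending
n = len(x)
i = np.arange(1, n + 1)

F = i / (n + 1)                   # empirical non-exceedance probability
T = 1.0 / (1.0 - F)               # observed return period, years

for xi, Fi, Ti in zip(x, F, T):
    print(f"flow={xi:6.1f}  F={Fi:.3f}  T={Ti:5.2f} yr")
```

In the article these (flow, probability) pairs feed Tanaka's fuzzy linear regression, which produces a fuzzy band guaranteed to contain all the observations.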

