Testing BRNBU ageing class of life-time distribution based on moment inequality

10.26524/cm87 ◽  
2021 ◽  
Vol 5 (1) ◽  
Author(s):  
Touseef Ahmed ◽  
Rizwan U

In this paper, a new moment inequality is derived for the Bivariate Renewal New Better than Used (BRNBU) ageing class of life-time distributions. This inequality demonstrates that if the mean life is finite, then all higher-order moments exist. Based on the moment inequality, a new procedure for testing bivariate exponentiality against the BRNBU ageing class of life-time distributions is introduced. The asymptotic normality of the test statistic and its consistency are studied. Using the Monte Carlo method, critical values of the proposed test are calculated for n = 5(5)100 and tabulated. Finally, the theoretical results are applied to analyze real-life data sets.
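Monte Carlo tabulation of critical values, as described in the abstract, follows a standard recipe: simulate the statistic many times under the null of exponentiality and take an upper quantile. A minimal sketch, with the understanding that the paper's actual BRNBU test statistic is not given in the abstract, so `toy_stat` below is a purely hypothetical placeholder:

```python
import random

def monte_carlo_critical_value(test_stat, n, alpha=0.05, reps=10000, seed=1):
    """Approximate the upper-alpha critical value of `test_stat` under the
    null of bivariate exponentiality by simulation."""
    rng = random.Random(seed)
    stats = []
    for _ in range(reps):
        # a null sample: n pairs of independent unit exponentials
        sample = [(rng.expovariate(1.0), rng.expovariate(1.0)) for _ in range(n)]
        stats.append(test_stat(sample))
    stats.sort()
    return stats[int((1 - alpha) * reps)]

# illustrative placeholder statistic (NOT the paper's): mean of min(X, Y)
toy_stat = lambda s: sum(min(x, y) for x, y in s) / len(s)
cv = monte_carlo_critical_value(toy_stat, n=20)
```

Repeating this for n = 5, 10, ..., 100 would produce a table of the kind the abstract describes.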

Author(s):  
Umar Kabir ◽  
Terna Godfrey IEREN

This article proposes a new distribution, referred to as the transmuted Exponential Lomax distribution (TELD), as an extension of the popular Lomax distribution in its Exponential Lomax form, obtained via the quadratic rank transmutation map proposed and studied in earlier research. Using the transmutation map, we defined the probability density function (PDF) and cumulative distribution function (CDF) of the transmuted Exponential Lomax distribution. Some properties of the new distribution were studied extensively after derivation. The distribution's parameters were estimated using the method of maximum likelihood. The performance of the proposed distribution was compared with some other generalizations of the Lomax distribution using three real-life data sets. The results indicated that the TELD performs better than the competing distributions, comprising the power Lomax, Exponential Lomax, and Lomax distributions.
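The quadratic rank transmutation map referred to above turns a base CDF G into F(x) = (1 + λ)G(x) − λG(x)², with |λ| ≤ 1 and λ = 0 recovering G. A minimal sketch; a standard exponential CDF stands in for the Exponential Lomax base, whose exact form the abstract does not give:

```python
import math

def transmuted_cdf(G, lam):
    """Quadratic rank transmutation map: F(x) = (1 + lam)*G(x) - lam*G(x)^2,
    with -1 <= lam <= 1; lam = 0 recovers the base distribution G."""
    if not -1.0 <= lam <= 1.0:
        raise ValueError("transmutation parameter must lie in [-1, 1]")
    return lambda x: (1.0 + lam) * G(x) - lam * G(x) ** 2

# illustration with a standard exponential base CDF (a stand-in, not the
# paper's Exponential Lomax CDF)
base = lambda x: 1.0 - math.exp(-x)
F = transmuted_cdf(base, lam=0.5)
```

Since F is a quadratic polynomial in G(x), it is itself a valid CDF for any admissible λ, which is what makes the map a convenient way to generate new families.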


2016 ◽  
Vol 8 (1) ◽  
pp. 78-98 ◽  
Author(s):  
Dániel Topál ◽  
István Matyasovszky ◽  
Zoltán Kern ◽  
István Gábor Hatvani

Time series often contain breakpoints of different origin, i.e. breakpoints caused by (i) shifts in trend, (ii) other changes in trend, and/or (iii) changes in variance. In the present study, artificially generated time series with white and red noise structures are analyzed using three recently developed breakpoint detection methods. The time series are modified so that the exact locations of the artificial breakpoints are prescribed, making it possible to evaluate the methods exactly. Hence, the study provides a deeper insight into the behaviour of the three breakpoint detection methods. This experience can help solve breakpoint detection problems in real-life data sets, as is demonstrated with two examples taken from the fields of paleoclimate research and petrology.
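The evaluation setup described above, with artificial series whose breakpoint locations are prescribed exactly, can be sketched as follows. This is a generic construction, not the paper's exact generator: an AR(1) "red noise" process with a mean shift injected at a known index.

```python
import random

def series_with_break(n, k, shift, phi=0.7, sigma=1.0, seed=0):
    """Generate an AR(1) (red noise) series of length n with a prescribed
    mean shift of size `shift` starting at index k, so the breakpoint
    location is known exactly and detection methods can be scored."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for i in range(n):
        x = phi * x + rng.gauss(0.0, sigma)        # red-noise component
        out.append(x + (shift if i >= k else 0.0)) # prescribed shift
    return out

y = series_with_break(n=400, k=200, shift=5.0)
```

Setting phi = 0 gives the white-noise case; varying `shift`, or injecting a trend change instead, covers the other breakpoint types mentioned in the abstract.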


Author(s):  
Fatoki Olayode ◽  
Phillips Samuel Ademola ◽  
Adeleye Najeem Friday

In this paper, a three-parameter lifetime model named the Type II Topp-Leone Gumbel type-2 distribution, which can be used to model reliability problems, fatigue life studies, and survival data, has been studied. We derived explicit expressions for some of its statistical properties, such as ordinary moments, the generating function, incomplete moments, and order statistics. The maximum likelihood estimation technique is used to estimate the parameters of the model. The tractability of the model was illustrated using two real-life data sets. The proposed distribution provides a better fit than some well-known distributions under goodness-of-fit criteria.


Author(s):  
Barinaadaa John Nwikpe

A new single-parameter probability distribution named the Tornumonkpe distribution is derived in this paper. The new model is a blend of the gamma(2, θ) and gamma(3, θ) distributions. The shape of its density for different values of the parameter has been shown. Mathematical expressions for the moment generating function, the first three raw moments, the second and third moments about the mean, the distribution of order statistics, the coefficient of variation, and the coefficient of skewness have been given. The parameter of the new distribution was estimated using the method of maximum likelihood. The goodness of fit of the Tornumonkpe distribution was established by fitting the distribution to three real-life data sets. Using -2lnL, the Bayesian Information Criterion (BIC), and the Akaike Information Criterion (AIC) as criteria for selecting the best-fitting model, it was revealed that the new distribution outperforms the one-parameter exponential, Shanker, and Amarendra distributions for the data sets used.
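A blend of two gamma components of the kind described can be sampled by first choosing a component and then drawing from it. A minimal sketch; the paper's exact θ-dependent mixing weight is not given in the abstract, so the weight `p` here is a free assumption:

```python
import random

def gamma_blend_sample(theta, p, n, seed=0):
    """Draw n values from a two-component blend of gamma(2, theta) and
    gamma(3, theta) (shape/rate parametrization), mixing with probability
    p -- a stand-in for the paper's theta-dependent mixing weight."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        shape = 2 if rng.random() < p else 3
        out.append(rng.gammavariate(shape, 1.0 / theta))  # scale = 1/rate
    return out

xs = gamma_blend_sample(theta=2.0, p=0.5, n=5000)
```

With shape/rate gammas the blend's mean is p·2/θ + (1 − p)·3/θ, which is how the raw moments listed in the abstract arise as weighted sums of gamma moments.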


2008 ◽  
pp. 1231-1249
Author(s):  
Jaehoon Kim ◽  
Seong Park

Much of the research on streaming data has focused only on real-time querying and analysis of recent data streams held in memory. However, as data stream mining, or tracking of past data streams, is often required, it becomes necessary to store large volumes of streaming data in stable storage. Moreover, as stable storage has restricted capacity, past data streams must be summarized. The summarization must be performed periodically because streaming data flows continuously, quickly, and endlessly. Therefore, in this paper, we propose an efficient periodic summarization method with flexible storage allocation. It improves the overall estimation error by flexibly adjusting the size of the summarized data in each local time section. Additionally, as the processing overhead of compression and the disk I/O cost of decompression can be important factors for quick summarization, we also consider setting the proper size of the data stream to be summarized at a time. Experimental results with artificial data sets as well as real-life data show that our flexible approach is more efficient than the existing fixed approach.


2020 ◽  
Vol 2020 ◽  
pp. 1-17
Author(s):  
Adewale F. Lukman ◽  
B. M. Golam Kibria ◽  
Kayode Ayinde ◽  
Segun L. Jegede

Motivated by the ridge regression (Hoerl and Kennard, 1970) and Liu (1993) estimators, this paper proposes a modified Liu estimator to address the multicollinearity problem in the linear regression model. The modification places the estimator in the class of ridge and Liu estimators with a single biasing parameter. Theoretical comparisons, a real-life application, and simulation results show that it consistently dominates the usual Liu estimator and, under some conditions, performs better than the ridge regression estimator in the smaller-MSE sense. Two real-life data sets are analyzed to illustrate the findings, with the performance of the estimators assessed by the MSE and the mean squared prediction error. The application results agree with the theoretical and simulation results.
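The two classical estimators this paper builds on are the ridge estimator (X'X + kI)⁻¹X'y and the Liu estimator (X'X + I)⁻¹(X'y + d·β̂_OLS). A minimal NumPy sketch of these baselines (the paper's modified Liu estimator itself is not specified in the abstract, so it is not reproduced here):

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares: solve (X'X) b = X'y."""
    return np.linalg.solve(X.T @ X, X.T @ y)

def ridge(X, y, k):
    """Ridge estimator (Hoerl and Kennard, 1970): (X'X + kI)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

def liu(X, y, d):
    """Liu (1993) estimator: (X'X + I)^{-1} (X'y + d * beta_OLS)."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + np.eye(p), X.T @ y + d * ols(X, y))
```

Both families shrink toward zero through a single biasing parameter, and both reduce to OLS at one end of their parameter range (k = 0 for ridge, d = 1 for Liu), which is the sense in which the proposed estimator sits "in the class of the ridge and Liu estimators".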


2008 ◽  
Vol 20 (4) ◽  
pp. 1042-1064
Author(s):  
Maciej Pedzisz ◽  
Danilo P. Mandic

A homomorphic feedforward network (HFFN) for nonlinear adaptive filtering is introduced. It is realized as a two-layer feedforward architecture with an exponential hidden layer and a logarithmic preprocessing step. This way, the overall input-output relationship can be seen as a generalized Volterra model, or as a bank of homomorphic filters. Gradient-based learning for this architecture is introduced, together with a discussion of practical issues related to the choice of optimal learning parameters and weight initialization. The performance and convergence speed are verified by analysis and extensive simulations. For rigor, the simulations are conducted on artificial and real-life data, and the performances are compared against those obtained by a sigmoidal feedforward network (FFN) with identical topology. The proposed HFFN proved to be a viable alternative to FFNs, especially in the critical case of online learning on small- and medium-scale data sets.
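The log-then-exp structure described above can be sketched as a forward pass: each exponential hidden unit applied to log-preprocessed inputs computes exp(Σᵢ aⱼᵢ log xᵢ) = Πᵢ xᵢ^aⱼᵢ, i.e. a product term, which is what gives the bank-of-homomorphic-filters interpretation. A minimal sketch under the stated two-layer assumption (bias terms and the paper's exact parametrization omitted):

```python
import math

def hffn_forward(x, A, w):
    """Forward pass of an HFFN-style network: logarithmic preprocessing,
    exponential hidden units, linear read-out. Each hidden unit computes
    exp(sum_i a_ji * log x_i) = prod_i x_i ** a_ji, so inputs must be
    positive for the log step."""
    logs = [math.log(xi) for xi in x]                         # log preprocessing
    hidden = [math.exp(sum(a * l for a, l in zip(row, logs))) for row in A]
    return sum(wj * hj for wj, hj in zip(w, hidden))          # linear output

# one hidden unit with exponents [1, 1] multiplies its inputs: 2 * 3
out = hffn_forward([2.0, 3.0], A=[[1.0, 1.0]], w=[1.0])
```

Because the hidden activations are smooth in A and w, ordinary gradient-based learning applies, as the abstract notes.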


Author(s):  
SANGHAMITRA BANDYOPADHYAY ◽  
UJJWAL MAULIK ◽  
MALAY KUMAR PAKHIRA

An efficient partitional clustering technique, called SAKM-clustering, that integrates the power of simulated annealing for obtaining a minimum-energy configuration with the searching capability of the K-means algorithm is proposed in this article. The clustering methodology searches for appropriate clusters in a multidimensional feature space such that a similarity metric of the resulting clusters is optimized. Data points are redistributed among the clusters probabilistically, so that points farther away from a cluster center have higher probabilities of migrating to other clusters than those closer to it. The superiority of the SAKM-clustering algorithm over the widely used K-means algorithm is extensively demonstrated for artificial and real-life data sets.
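The probabilistic redistribution described above can be sketched as annealed K-means: points are reassigned by sampling from a temperature-controlled distribution over centers (distant points migrate more readily), and the temperature is cooled so the procedure hardens into ordinary nearest-center K-means. This follows the general idea only; the paper's exact energy function and cooling schedule are assumptions here:

```python
import math, random

def sakm_sketch(points, k, iters=200, t0=2.0, cooling=0.97, seed=0):
    """Annealed K-means sketch: softmax-over-negative-distance assignment
    at temperature T, centroid update, then cool T toward zero."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    T = t0
    for _ in range(iters):
        for i, p in enumerate(points):
            d = [math.dist(p, c) for c in centers]
            w = [math.exp(-di / max(T, 1e-9)) for di in d]  # migration weights
            r, acc = rng.random() * sum(w), 0.0
            for j, wj in enumerate(w):
                acc += wj
                if r <= acc:
                    labels[i] = j
                    break
        for j in range(k):                                   # centroid update
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centers[j] = tuple(sum(c) / len(members) for c in zip(*members))
        T *= cooling                                         # anneal
    return labels, centers

pts = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
labels, centers = sakm_sketch(pts, k=2)
```

At high T the assignment is nearly uniform (allowing escapes from poor configurations); as T → 0 it converges to deterministic nearest-center assignment.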


2016 ◽  
Vol 33 (6) ◽  
pp. 1352-1386 ◽  
Author(s):  
Herold Dehling ◽  
Daniel Vogel ◽  
Martin Wendler ◽  
Dominik Wied

For a bivariate time series (X_i, Y_i), i = 1, …, n, we want to detect whether the correlation between X_i and Y_i stays constant for all i = 1, …, n. We propose a nonparametric change-point test statistic based on Kendall's tau. The asymptotic distribution under the null hypothesis of no change follows from a new U-statistic invariance principle for dependent processes. Assuming a single change-point, we show that the location of the change-point is consistently estimated. Kendall's tau possesses high efficiency at the normal distribution compared to the normal maximum likelihood estimator, Pearson's moment correlation. Contrary to Pearson's correlation coefficient, it shows no loss in efficiency at heavy-tailed distributions and is therefore particularly suited for financial data, where heavy tails are common. We assume the data (X_i, Y_i), i = 1, …, n, to be stationary and P-near epoch dependent on an absolutely regular process. The P-near epoch dependence condition generalizes the usually considered L_p-near epoch dependence, allowing for arbitrarily heavy-tailed data. We investigate the test numerically, compare it to previous proposals, and illustrate its application with two real-life data examples.
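The building block of such a test is Kendall's tau itself: the normalized difference between concordant and discordant pairs, compared across candidate segments. A minimal O(n²) sketch of the coefficient (no tie correction, and not the paper's full change-point statistic):

```python
def kendalls_tau(x, y):
    """Kendall's tau by definition: (concordant - discordant) pairs,
    normalized by the number of pairs n*(n-1)/2. Assumes no ties."""
    n = len(x)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            dx = (x[i] - x[j] > 0) - (x[i] - x[j] < 0)  # sign of x-difference
            dy = (y[i] - y[j] > 0) - (y[i] - y[j] < 0)  # sign of y-difference
            s += dx * dy
    return 2.0 * s / (n * (n - 1))

tau_up = kendalls_tau([1, 2, 3, 4], [2, 4, 6, 8])  # perfectly concordant
tau_dn = kendalls_tau([1, 2, 3, 4], [8, 6, 4, 2])  # perfectly discordant
```

Because tau depends only on the signs of pairwise differences, arbitrarily heavy-tailed observations do not inflate it, which is the robustness property the abstract emphasizes for financial data.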


Author(s):  
Mohamed Ibrahim Mohamed ◽  
Laba Handique ◽  
Subrata Chakraborty ◽  
Nadeem Shafique Butt ◽  
Haitham M. Yousof

In this article, an attempt is made to introduce a new extension of the Fréchet model called the Xgamma Fréchet model. Some of its properties are derived, and the estimation of its parameters via different methods is discussed. The performance of the proposed estimation methods is investigated through simulations as well as real-life data sets. The potential of the proposed model is established through the modelling of two real-life data sets. The results show a clear preference for the proposed model over several known competing ones.

