Generalized Modified Slash Birnbaum–Saunders Distribution

Symmetry ◽  
2018 ◽  
Vol 10 (12) ◽  
pp. 724 ◽  
Author(s):  
Jimmy Reyes ◽  
Inmaculada Barranco-Chamorro ◽  
Diego Gallardo ◽  
Héctor Gómez

In this paper, a generalization of the modified slash Birnbaum–Saunders (BS) distribution is introduced. The model is defined through the stochastic representation of the BS distribution, in which the standard normal distribution is replaced by a symmetric distribution proposed by Reyes et al. It is proved that this new distribution can model greater kurtosis than other extensions of the BS distribution previously proposed in the literature. Closed-form expressions are given for the probability density function (pdf), along with its moments and its skewness and kurtosis coefficients. Inference is carried out with the modified moments method and maximum likelihood (ML). To obtain the ML estimates, two approaches are considered: the Newton–Raphson method and the EM algorithm. Applications show that the model has potential to perform well in real problems.
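As a concrete illustration of the stochastic representation mentioned above, the classical BS(α, β) law can be sampled by transforming a standard normal draw; the paper's generalization replaces that normal draw with the symmetric distribution of Reyes et al. A minimal sketch (the function name `rbs` is ours, not the paper's):

```python
import math
import random

def rbs(alpha, beta, n, rng=random.Random(42)):
    """Draw n samples from the classical Birnbaum-Saunders(alpha, beta)
    distribution via its stochastic representation
    T = beta * (alpha*Z/2 + sqrt((alpha*Z/2)**2 + 1))**2,  Z ~ N(0, 1)."""
    out = []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        w = alpha * z / 2.0
        out.append(beta * (w + math.sqrt(w * w + 1.0)) ** 2)
    return out

samples = sorted(rbs(alpha=0.5, beta=1.0, n=20001))
# The median of BS(alpha, beta) is exactly beta, so this should be near 1.
print(round(samples[len(samples) // 2], 2))
```

Swapping `rng.gauss` for a heavier-tailed symmetric sampler is precisely what produces the extra kurtosis the paper studies.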

2019 ◽  
Vol 27 (4) ◽  
pp. 243-251
Author(s):  
Weizhong Tian ◽  
Fengrong Wei

Abstract In this paper, the family of scale mixtures of multivariate skew slash distributions is introduced. The probability density function is discussed, together with some additional properties. The first four moments and the skewness and kurtosis of this distribution are calculated. Furthermore, the first two moments of its quadratic forms are obtained. In particular, the linear transformation, stochastic representation and hierarchical representation are studied. Finally, an EM algorithm for parameter estimation is proposed.
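The slash ingredient above has a simple stochastic representation in the univariate case: X = Z / U^(1/q), with Z standard normal and U uniform on (0, 1), independent. A small sketch of that representation (the paper's multivariate skew version adds a skewing mechanism and a mixing scale on top of this):

```python
import random

def rslash(q, n, rng=random.Random(0)):
    """Sample the canonical univariate slash distribution via its
    stochastic representation X = Z / U**(1/q), Z ~ N(0,1), U ~ U(0,1)
    independent; smaller q gives heavier tails (q -> inf recovers N(0,1))."""
    return [rng.gauss(0.0, 1.0) / rng.random() ** (1.0 / q) for _ in range(n)]

x = sorted(rslash(q=2.0, n=10001))
print(round(x[len(x) // 2], 2))  # sample median, close to 0 by symmetry
```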


Author(s):  
Nadia Hashim Al-Noor ◽  
Shurooq A.K. Al-Sultany

In real situations, observations and measurements are not all exact numbers; they are more or less non-exact, also called fuzzy. In this paper, we therefore use approximate non-Bayesian computational methods to estimate the inverse Weibull parameters and reliability function from fuzzy data. The maximum likelihood and moment estimates are obtained as non-Bayesian estimates. The maximum likelihood estimators are derived numerically with two iterative techniques, namely the Newton-Raphson and the Expectation-Maximization (EM) techniques. In addition, the estimates of the parameters and of the reliability function are compared numerically through a Monte-Carlo simulation study in terms of their mean squared error and integrated mean squared error values, respectively.
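To make the Newton-Raphson step concrete for (crisp, non-fuzzy) Weibull-type data, here is a hedged sketch: it solves the profile score equation for the Weibull shape parameter, which applies to inverse Weibull data after the transformation y = 1/x. The function name is ours, and the paper's fuzzy-data likelihood is not reproduced:

```python
import math
import random

def weibull_shape_mle(y, k0=1.0, tol=1e-10, max_iter=100):
    """Newton-Raphson for the Weibull shape k with the scale profiled out.
    If X follows an inverse Weibull law, applying this to y = 1/x gives
    the inverse Weibull shape estimate."""
    logs = [math.log(v) for v in y]
    mlog = sum(logs) / len(logs)
    k = k0
    for _ in range(max_iter):
        a = sum(v ** k for v in y)
        b = sum(v ** k * math.log(v) for v in y)
        c = sum(v ** k * math.log(v) ** 2 for v in y)
        g = 1.0 / k + mlog - b / a                  # score equation g(k) = 0
        dg = -1.0 / k ** 2 - (c * a - b * b) / a ** 2
        step = g / dg
        k -= step
        if abs(step) < tol:
            break
    return k

rng = random.Random(1)
# Simulate inverse Weibull data with shape 2 (so 1/x is Weibull(2, 1)).
x = [1.0 / (-math.log(rng.random())) ** 0.5 for _ in range(5000)]
print(round(weibull_shape_mle([1.0 / v for v in x]), 1))
```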


2020 ◽  
Author(s):  
Ahmad Sudi Pratikno

In statistics there are various terms that may feel unfamiliar to a researcher who is not accustomed to discussing them. Nevertheless, the many functions and benefits available to researchers for processing data mean that the data can later be interpreted into a conclusion, which the researcher can then digest and understand as research findings. The distribution of a continuous random variable describes the probability of outcomes measured on a continuous scale, such as time, weather and other data obtained from the field. The standard normal distribution is a stable curve with mean zero and standard deviation 1, while the t distribution is used as a test statistic in hypothesis testing. The chi-square distribution deals with comparative tests on two variables with a nominal data scale, while the F distribution is often used in the ANOVA test and in regression analysis.
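The "mean zero and standard deviation 1" property of the standard normal, and its role in two-sided tests, can be checked directly with Python's standard library:

```python
from statistics import NormalDist

z = NormalDist(mu=0.0, sigma=1.0)  # the standard normal curve

# About 95% of the mass lies within 1.96 standard deviations of the mean,
# which is why 1.96 appears in two-sided tests at the 5% level.
print(round(z.cdf(1.96) - z.cdf(-1.96), 3))   # → 0.95
print(round(z.inv_cdf(0.975), 2))             # → 1.96
```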


2021 ◽  
Vol 13 (2) ◽  
pp. 51
Author(s):  
Lili Sun ◽  
Xueyan Liu ◽  
Min Zhao ◽  
Bo Yang

The variational graph autoencoder, which can encode the structural and attribute information of a graph into low-dimensional representations, has become a powerful method for studying graph-structured data. However, most existing methods based on variational (graph) autoencoders assume that the prior of the latent variables is the standard normal distribution, which encourages all nodes to gather around 0 and prevents the latent space from being fully utilized. Choosing a suitable prior without incorporating additional expert knowledge therefore becomes a challenge. Given this, we propose a novel noninformative prior-based interpretable variational graph autoencoder (NPIVGAE). Specifically, we use a noninformative prior as the prior distribution of the latent variables. This prior allows the posterior distribution parameters to be learned almost entirely from the sample data. Furthermore, we regard each dimension of a latent variable as the probability that the node belongs to each block, thereby improving the interpretability of the model. The correlation within and between blocks is described by a block–block correlation matrix. We compare our model with state-of-the-art methods on three real datasets, verifying its effectiveness and superiority.
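The claim that a standard normal prior "encourages all nodes to gather around 0" can be read off the closed-form KL term such a prior contributes to the variational objective. This sketch shows that penalty only, not the NPIVGAE model itself:

```python
import math

def kl_to_std_normal(mu, sigma):
    """Closed-form KL( N(mu, sigma^2) || N(0, 1) ), the per-dimension prior
    regularizer in VAEs/VGAEs with a standard normal prior.  It is zero
    exactly at mu = 0, sigma = 1, which is what pulls all latent codes
    toward the origin."""
    return 0.5 * (sigma ** 2 + mu ** 2 - 1.0) - math.log(sigma)

print(kl_to_std_normal(0.0, 1.0))            # → 0.0  (no penalty at the prior)
print(round(kl_to_std_normal(2.0, 1.0), 1))  # → 2.0  (cost of drifting to mu=2)
```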


1995 ◽  
Vol 12 (5) ◽  
pp. 515-527 ◽  
Author(s):  
Jeanine J. Houwing-Duistermaat ◽  
Lodewijk A. Sandkuijl ◽  
Arthur A. B. Bergen ◽  
Hans C. van Houwelingen

2021 ◽  
Vol 68 (1) ◽  
pp. 17-46
Author(s):  
Adam Korczyński

Statistical practice requires various imperfections resulting from the nature of data to be addressed. Data containing different types of measurement errors and irregularities, such as missing observations, have to be modelled. The study presented in the paper concerns the application of the expectation-maximisation (EM) algorithm to calculate maximum likelihood estimates, using an autoregressive model as an example. The model describes a process observed only through measurements with a certain level of precision and through more than one data series. The studied series are affected by measurement error and interrupted in some time periods, which makes the information available for parameter estimation, and later for prediction, less precise. The presented technique aims to compensate for missing data in time series, where the missing data appear as breaks in the source of the signal. To this end, the EM algorithm has been adjusted into a hybrid version supplemented by the Newton-Raphson method, which allows more complex models to be estimated. The formulation of the substantive model of an autoregressive process affected by noise is outlined, as well as the adjustment introduced to overcome the issue of missing data. The extended version of the algorithm has been verified on data sampled from a model serving as an example of the examined process. The verification demonstrated that the joint EM and Newton-Raphson algorithm converged in a relatively small number of iterations and restored the information lost due to missing data, providing more accurate predictions than the original algorithm. The study also features an example of the application of the supplemented algorithm to empirical data (in the calculation of forecasted demand for newspapers).
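A deliberately simplified, imputation-style version of such an EM loop for an AR(1) series with isolated missing points might look as follows. This is only a sketch under our own simplifications: the paper's full algorithm also tracks second moments and measurement noise and refines the estimates with Newton-Raphson steps, and the names here are ours:

```python
import random

def em_ar1(x, missing, n_iter=50):
    """Toy EM-style loop for an AR(1) series with isolated missing points.
    E-step: replace each missing x[t] by its conditional mean given its
    neighbors, phi*(x[t-1] + x[t+1]) / (1 + phi**2).
    M-step: least-squares update of the AR coefficient phi."""
    x = list(x)
    phi = 0.0
    for _ in range(n_iter):
        for t in missing:                      # E-step (point imputation)
            x[t] = phi * (x[t - 1] + x[t + 1]) / (1.0 + phi ** 2)
        num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
        den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
        phi = num / den                        # M-step
    return phi, x

rng = random.Random(7)
phi_true, series = 0.8, [0.0]
for _ in range(400):
    series.append(phi_true * series[-1] + rng.gauss(0.0, 1.0))
gaps = [50, 120, 300]                          # isolated missing indices
obs = [v if t not in gaps else 0.0 for t, v in enumerate(series)]
phi_hat, filled = em_ar1(obs, gaps)
print(round(phi_hat, 1))                       # recovered AR coefficient
```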


Symmetry ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 2139
Author(s):  
Xiuqiong Chen ◽  
Jiayi Kang ◽  
Mina Teicher ◽  
Stephen S.-T. Yau

Nonlinear filtering is of great significance in industry. In this work, we develop a new linear regression Kalman filter for discrete nonlinear filtering problems. Under the linear regression Kalman filter framework, the key step is minimizing the Kullback–Leibler divergence between the standard normal distribution and its Dirac mixture approximation formed by symmetric samples, so that we obtain a set of samples that captures the information of the reference density. The samples representing the conditional densities evolve in a deterministic way, and therefore we need fewer samples than the particle filter, as there is less variance in our method. The numerical results show that the new algorithm is more efficient than the widely used extended Kalman filter, unscented Kalman filter and particle filter.
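One classical way to build a symmetric Dirac mixture that matches the standard normal's mean and covariance exactly is the sigma-point construction below; the paper instead selects its symmetric samples by minimizing a Kullback–Leibler divergence, so this is only an illustrative stand-in for the general idea:

```python
import math

def symmetric_points(n, kappa=2.0):
    """Symmetric Dirac-mixture (sigma-point) approximation of the
    n-dimensional standard normal: one point at the origin plus a pair at
    +/- sqrt(n + kappa) along each axis, with weights chosen so that the
    mixture's mean and covariance match N(0, I) exactly."""
    scale = math.sqrt(n + kappa)
    pts = [[0.0] * n]
    wts = [kappa / (n + kappa)]
    for i in range(n):
        for s in (+1.0, -1.0):
            p = [0.0] * n
            p[i] = s * scale
            pts.append(p)
            wts.append(0.5 / (n + kappa))
    return pts, wts

pts, wts = symmetric_points(1)
mean = sum(w * p[0] for p, w in zip(pts, wts))
var = sum(w * p[0] ** 2 for p, w in zip(pts, wts))
print(round(mean, 6), round(var, 6))  # → 0.0 1.0
```

Propagating these weighted points through a nonlinear map and re-fitting a Gaussian is exactly the deterministic sample evolution the abstract contrasts with the particle filter's random sampling.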

