Saddlepoint Approximation for Data in Simplices: A Review with New Applications

Stats
2019
Vol 2 (1)
pp. 121-147
Author(s):
Riccardo Gatto

This article provides a review of the saddlepoint approximation for an M-statistic of a sample of nonnegative random variables with fixed sum. The sample vector follows the multinomial, the multivariate hypergeometric, the multivariate Pólya, or the Dirichlet distribution. The main objective is to provide a complete presentation, in a single and unambiguous notation, of the common mathematical framework of these four situations: the simplex sample space and the underlying general urn model. Some important applications are reviewed, with special attention given to recent applications to models of circular data. Some novel applications are developed and studied numerically.
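All four sampling models constrain the sample vector to a fixed total, i.e. to a discrete simplex. As a minimal, hypothetical illustration (not from the article), one can draw a multinomial count vector with fixed sum n and evaluate a simple M-statistic on it; the sampler and the quadratic score below are arbitrary choices for the sketch:

```python
import random

def multinomial_sample(n, probs, rng):
    """Draw one multinomial count vector with fixed total n by n categorical draws."""
    counts = [0] * len(probs)
    for _ in range(n):
        u, acc = rng.random(), 0.0
        for j, p in enumerate(probs):
            acc += p
            if u < acc:
                counts[j] += 1
                break
        else:
            counts[-1] += 1  # floating-point guard
    return counts

rng = random.Random(0)
n, probs = 50, [0.2, 0.3, 0.5]
x = multinomial_sample(n, probs, rng)
# the sample lies on the discrete simplex: nonnegative counts with fixed sum
# an illustrative M-statistic: a normalized sum of a score over the components
m_stat = sum(c * c for c in x) / n
```

Any statistic of this form inherits the fixed-sum constraint, which is what makes the common simplex framework of the four models possible.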

Risks
2018
Vol 6 (3)
pp. 91
Author(s):
Riccardo Gatto

In this article we introduce the stability analysis of a compound sum: it consists of computing the standardized variation of the survival function of the sum resulting from an infinitesimal perturbation of the common distribution of the summands. Stability analysis is complementary to the classical sensitivity analysis, which consists of computing the derivative of an important indicator of the model, with respect to a model parameter. We obtain a computational formula for this stability from the saddlepoint approximation. We apply the formula to the compound Poisson insurer loss with gamma individual claim amounts and to the compound geometric loss with Weibull individual claim amounts.
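The article obtains the stability of the survival function from the saddlepoint approximation; as a rough, hypothetical stand-in, the same quantity can be approximated by Monte Carlo with a finite-difference perturbation of the gamma claim-size scale (all parameter values below are made up):

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's product method; adequate for small lam."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def survival(lam, shape, scale, threshold, n_sims, seed):
    """Monte Carlo estimate of P(S > threshold) for the compound Poisson loss
    S = X_1 + ... + X_N, N ~ Poisson(lam), X_i ~ Gamma(shape, scale)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        n = poisson_sample(lam, rng)
        s = sum(rng.gammavariate(shape, scale) for _ in range(n))
        if s > threshold:
            hits += 1
    return hits / n_sims

# crude finite-difference stand-in for the stability: standardized change of
# the survival function under a small perturbation of the claim-size scale
# (common random numbers via a shared seed keep the comparison coherent)
eps = 0.05
base = survival(2.0, 2.0, 1.0, 10.0, 4000, seed=7)
pert = survival(2.0, 2.0, 1.0 + eps, 10.0, 4000, seed=7)
stability = (pert - base) / eps
```

The saddlepoint-based formula of the article replaces this noisy simulation with an accurate analytical approximation.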


2016
Author(s):
Michael Maraun
Moritz Heene

There has come to exist within the psychometric literature a generalized belief to the effect that a determination of the level of factorial invariance that holds over a set of s populations Δj, j = 1, …, s, is central to ascertaining whether or not the common factor random variables ξj, j = 1, …, s, are equivalent. In the current manuscript, a technical examination of this belief is undertaken. The chief conclusion of the work is that, as long as technical, statistical senses of random variable equivalence are adhered to, the belief is unfounded.


1987
Vol 19 (2)
pp. 454-473
Author(s):
E. G. Coffman
L. Flatto
R. R. Weber

We model a selection process arising in certain storage problems. A sequence (X1, …, Xn) of non-negative, independent and identically distributed random variables is given; F(x) denotes the common distribution of the Xi's. With F(x) given, we seek a decision rule for selecting a maximum number of the Xi's subject to the following constraints: (1) the sum of the elements selected must not exceed a given constant c > 0, and (2) the Xi's must be inspected in strict sequence, with the decision to accept or reject an element being final at the time it is inspected. We prove first that there exists such a rule of threshold type, i.e. the ith element inspected is accepted if and only if it is no larger than a threshold which depends only on i and the sum of the elements already accepted. Next, we prove that if F(x) ~ Ax^α as x → 0 for some A, α > 0, then for fixed c the expected number En(c) selected by an optimal threshold rule is characterized. Asymptotics as c → ∞ and n → ∞ with c/n held fixed are derived, and connections with several closely related, well-known problems are brought out and discussed.
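The optimal thresholds of the paper depend only on the index i and the accepted sum; the sketch below uses a simpler equal-share heuristic of the same form (threshold = remaining budget divided by remaining inspections), purely for illustration and with no claim of optimality:

```python
def threshold_select(xs, c):
    """Sequentially accept x if it is no larger than an equal-share threshold:
    (remaining budget) / (remaining inspections). An illustrative heuristic of
    threshold type; the paper characterizes the truly optimal thresholds."""
    accepted_sum, count = 0.0, 0
    n = len(xs)
    for i, x in enumerate(xs):
        t = (c - accepted_sum) / (n - i)
        if x <= t:
            accepted_sum += x
            count += 1
    return count, accepted_sum
```

Because each accepted x is at most the remaining budget divided by the remaining steps, the accepted sum can never exceed c, so constraint (1) holds by construction.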


1971
Vol 14 (3)
pp. 451-452
Author(s):
M. V. Menon
V. Seshadri

Let X1, X2, …, be a sequence of independent and identically distributed random variables with common distribution function F(x). The sequence is said to be normally attracted to a stable law V with characteristic exponent α if, for some constants an, n^(−1/α)(X1 + ⋯ + Xn) − an converges in distribution to V. Necessary and sufficient conditions for normal attraction are known (cf. [1, p. 181]).


Author(s):
Johan Ölvander
Xiaolong Feng
Bo Holmgren

Product family design is a well-recognized method for addressing the demands of mass customization. A potential drawback of product families is that the performance of individual members is reduced by the constraints the common platform adds, i.e. parts and components must be shared with other family members. This paper presents a formal mathematical framework in which the product family design problem is stated as an optimization problem, and optimization is used to find an optimal product family. The object of study is the kinematic design of a family of industrial robots. Each robot is a serial manipulator, and different robots share arms from a common platform. The objective is to show the trade-off between the size of the common platform and the kinematic performance of the robots.
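A toy version of the trade-off (entirely hypothetical, not the paper's robot model): if each family member's performance loss is the squared deviation of a shared arm length from that member's individual optimum, the loss-minimizing platform value is the mean of the optima, and the residual loss quantifies the cost of commonality:

```python
def platform_tradeoff(ideal_lengths):
    """Toy commonality trade-off: performance loss of each member is the squared
    deviation of the shared arm length from its individually optimal length.
    The loss-minimizing shared length is then the mean of the optima."""
    shared = sum(ideal_lengths) / len(ideal_lengths)
    loss = sum((shared - length) ** 2 for length in ideal_lengths)
    return shared, loss
```

The more the members' individual optima spread out, the larger the residual loss, which is the qualitative trade-off the paper quantifies with a full kinematic model.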


2014
Vol 14 (21)
pp. 11791-11815
Author(s):
I. Kioutsioukis
S. Galmarini

Abstract. Ensembles of air quality models have been formally and empirically shown to outperform single models in many cases. Evidence suggests that ensemble error is reduced when the members form a diverse and accurate ensemble. Diversity and accuracy are hence two factors that should be considered when designing ensembles in order for them to provide better predictions. Theoretical results such as the bias–variance–covariance decomposition and the accuracy–diversity decomposition are linked together and support the importance of creating ensembles that incorporate both elements. Hence, the common practice of unconditional averaging of models without prior manipulation limits the advantages of ensemble averaging. We demonstrate the importance of ensemble accuracy and diversity through an inter-comparison of ensemble products for which a sound mathematical framework exists, and provide specific recommendations for model selection and weighting for multi-model ensembles. With proper training, the sophisticated ensemble averaging techniques were shown to have higher skill across all distribution bins than plain unweighted ensemble averaging of forecasts.
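As a toy, hypothetical illustration of why weighting can beat unconditional averaging, compare an equal-weight mean with a skill-weighted mean on made-up forecasts, where one member carries a systematic bias:

```python
def weighted_ensemble(forecasts, weights):
    """Weighted average of member forecasts at each time (weights sum to 1)."""
    return [sum(w * f[t] for w, f in zip(weights, forecasts))
            for t in range(len(forecasts[0]))]

def mse(pred, obs):
    """Mean squared error of a forecast series against observations."""
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

# hypothetical training data: two members, one accurate, one biased high
obs = [1.0, 2.0, 3.0, 4.0]
m1 = [1.1, 2.1, 2.9, 4.2]   # accurate member
m2 = [2.0, 3.0, 4.0, 5.0]   # member with a +1 bias
equal = weighted_ensemble([m1, m2], [0.5, 0.5])   # unconditional averaging
skewed = weighted_ensemble([m1, m2], [0.9, 0.1])  # skill-weighted averaging
```

Down-weighting the biased member lowers the ensemble MSE, which is the accuracy side of the accuracy–diversity argument; the decompositions cited in the abstract make the covariance (diversity) contribution explicit as well.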


1994
Vol 31 (1)
pp. 256-261
Author(s):
S. R. Adke
C. Chandran

Let {ξn, n ≥ 1} be a sequence of independent real random variables, let F denote the common distribution function of the identically distributed random variables ξn, n ≥ 2, and let ξ1 have an arbitrary distribution. Define Xn+1 = k max(Xn, ξn+1), Yn+1 = max(Yn, ξn+1) − c, Un+1 = l min(Un, ξn+1), Vn+1 = min(Vn, ξn+1) + c, for n ≥ 1, with 0 < k < 1, l > 1, 0 < c < ∞, and X1 = Y1 = U1 = V1 = ξ1. We establish conditions under which the limit law of max(X1, …, Xn) coincides with that of max(ξ2, …, ξn+1) when both are appropriately normed. A similar exercise is carried out for the extreme statistics max(Y1, …, Yn), min(U1, …, Un) and min(V1, …, Vn).
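The four recursions are straightforward to simulate; a minimal sketch (function name and inputs hypothetical), which generates all four sequences from a common list of innovations:

```python
def extremal_sequences(xi, k, l, c):
    """Generate the four recursive sequences X, Y, U, V of the abstract from a
    list of innovations xi (xi[0] plays the role of xi_1), with 0 < k < 1,
    l > 1 and 0 < c < infinity."""
    X, Y, U, V = [xi[0]], [xi[0]], [xi[0]], [xi[0]]
    for z in xi[1:]:
        X.append(k * max(X[-1], z))      # geometrically damped maximum
        Y.append(max(Y[-1], z) - c)      # maximum with linear drift down
        U.append(l * min(U[-1], z))      # geometrically inflated minimum
        V.append(min(V[-1], z) + c)      # minimum with linear drift up
    return X, Y, U, V
```

Running the recursions on simulated iid innovations and comparing the normed extremes of X with those of the raw innovations is a direct numerical check of the kind of coincidence of limit laws the paper establishes.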


2019
Vol 36 (4)
pp. 1169-1200
Author(s):
Yunfei Zu
Wenliang Fan
Jingyao Zhang
Zhengling Li
Makoto Ohsaki

Purpose: Conversion of correlated random variables into independent variables, especially into independent standard normal variables, is a common technique for estimating the statistical moments of a response and for evaluating the reliability of a random system, in which calculating the equivalent correlation coefficient is an important component. The purpose of this paper is to investigate an accurate, efficient and easy-to-implement estimation method for the equivalent correlation coefficient of various incomplete probability systems.

Design/methodology/approach: First, an approach based on Mehler's formula for evaluating the equivalent correlation coefficient is introduced. Then, by combining it with polynomial normal transformations, this approach is made valid for various incomplete probability systems; it is named the direct method. Next, using convenient linear reference variables for eight frequently used random variables and an approximation of the Rosenblatt transformation, a further improved implementation without an iteration process is developed; it is named the simplified method. Finally, several examples are investigated to verify the characteristics of the proposed methods.

Findings: The results of the examples show that both proposed methods are highly accurate; by comparison, the simplified method is more efficient and convenient.

Originality/value: Based on Mehler's formula, two practical implementations for evaluating the equivalent correlation coefficient are proposed, which are accurate, efficient, easy to implement and valid for various incomplete probability systems.
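For a second-order polynomial normal transformation written in Hermite form, Mehler's formula gives the correlation of the transformed variables in closed form, and the equivalent Gaussian correlation can be recovered by a simple root search; this is a minimal sketch of the idea, not the paper's implementation, and the coefficients below are hypothetical:

```python
def mehler_corr(rho_z, b1, b2):
    """Correlation of X_i = b0 + b1*He1(Z_i) + b2*He2(Z_i), with He1(z) = z and
    He2(z) = z^2 - 1, when (Z_1, Z_2) are standard normal with correlation
    rho_z. Mehler's formula gives Cov = b1^2 * rho_z + 2 * b2^2 * rho_z^2 and
    Var = b1^2 + 2 * b2^2 (b0 drops out of the correlation)."""
    var = b1 ** 2 + 2.0 * b2 ** 2
    return (b1 ** 2 * rho_z + 2.0 * b2 ** 2 * rho_z ** 2) / var

def equivalent_rho(target, b1, b2, tol=1e-10):
    """Bisection for the equivalent Gaussian correlation rho_z that makes the
    transformed variables attain the target correlation (assumes the mapping
    is monotone on [-1, 1], which holds when b1^2 > 4 * b2^2)."""
    lo, hi = -1.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mehler_corr(mid, b1, b2) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The direct method of the paper follows this pattern with higher-order polynomial transformations fitted to the available moment information; the simplified method then removes the iteration.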


1999
Vol 36 (1)
pp. 132-138
Author(s):
M. P. Quine
W. Szczotka

We define a stochastic process {Xn} based on partial sums of a sequence of integer-valued random variables (K0,K1,…). The process can be represented as an urn model, which is a natural generalization of a gambling model used in the first published exposition of the criticality theorem of the classical branching process. A special case of the process is also of interest in the context of a self-annihilating branching process. Our main result is that when (K1,K2,…) are independent and identically distributed, with mean a ∊ (1,∞), there exist constants {cn} with cn+1/cn → a as n → ∞ such that Xn/cn converges almost surely to a finite random variable which is positive on the event {Xn ↛ 0}. The result is extended to the case of exchangeable summands.
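The normalization Xn/cn with cn+1/cn → a parallels the classical branching-process result that Zn/a^n converges almost surely to a limit that is positive on non-extinction; a sketch of that classical analogue (not the generalized urn process itself, and with a made-up offspring distribution):

```python
import random

def gw_normed(probs, n_gens, rng):
    """Z_n / a^n for a Galton-Watson process with offspring distribution
    probs (count -> probability) of mean a. Classical analogue of the
    abstract's normalized convergence X_n / c_n."""
    a = sum(k * p for k, p in probs.items())
    keys = sorted(probs)
    z = 1
    for _ in range(n_gens):
        nxt = 0
        for _ in range(z):
            u, acc = rng.random(), 0.0
            for k in keys:
                acc += probs[k]
                if u < acc:
                    nxt += k
                    break
            else:
                nxt += keys[-1]  # floating-point guard
        z = nxt
    return z / a ** n_gens
```

With a supercritical mean a > 1, repeated runs show the normed value stabilizing across generations on surviving trajectories and hitting 0 on extinct ones, mirroring the positivity of the limit on the event {Xn ↛ 0}.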

