The Sampling Distribution of the Total Correlation for Multivariate Gaussian Random Variables

Entropy ◽  
2019 ◽  
Vol 21 (10) ◽  
pp. 921 ◽  
Author(s):  
Taylor Rowe ◽  
Troy Day

The sampling distribution of the total correlation (TC) for a d-dimensional standardized multivariate Gaussian random variable with an identity covariance matrix is derived. It is shown to be the distribution of a sum of generalized beta random variables. It is also shown that, for large dimension and sample size, a central limit theorem holds, providing a Gaussian approximation to the sampling distribution for high dimensional data.
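The null setting described above (identity covariance) is easy to probe by Monte Carlo. The sketch below uses the standard Gaussian plug-in estimator TC ≈ -½ log det R, where R is the sample correlation matrix; this is an illustrative simulation of the sampling distribution, not the paper's closed-form derivation, and the dimension, sample size, and replication count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, reps = 10, 200, 2000

tc_hat = np.empty(reps)
for i in range(reps):
    x = rng.standard_normal((n, d))      # true covariance = identity, so true TC = 0
    r = np.corrcoef(x, rowvar=False)     # d x d sample correlation matrix
    # Gaussian plug-in estimator of total correlation; det(R) <= 1
    # for a correlation matrix, so the estimate is nonnegative
    tc_hat[i] = -0.5 * np.log(np.linalg.det(r))

print(tc_hat.mean(), tc_hat.std())
```

A histogram of `tc_hat` then gives the empirical sampling distribution, which for large d and n should look approximately Gaussian, as the paper's central limit theorem predicts.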

Author(s):  
Marshall A. Taylor

Understanding the central limit theorem is crucial for comprehending parametric inferential statistics. Despite this, undergraduate and graduate students alike often struggle with grasping how the theorem works and why researchers rely on its properties to draw inferences from a single unbiased random sample. In this article, I outline a new command, sdist, that can be used to simulate the central limit theorem by generating a matrix of randomly generated normal or nonnormal variables and comparing the true sampling distribution standard deviation with the standard error from the first randomly generated sample. The user also has the option of plotting the empirical sampling distribution of sample means, the first random variable distribution, and a stacked visualization of the two distributions.
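sdist itself is a Stata command; as a rough Python analogue of the comparison it performs (the sampling-distribution standard deviation of the mean versus the standard error computed from the first sample alone), with an exponential population chosen purely to illustrate the non-normal case:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 50, 1000

# matrix of `reps` samples of size n from a skewed (exponential) population
samples = rng.exponential(scale=1.0, size=(reps, n))

sample_means = samples.mean(axis=1)              # empirical sampling distribution
sd_sampling = sample_means.std(ddof=1)           # SD of the sampling distribution
se_first = samples[0].std(ddof=1) / np.sqrt(n)   # SE from the first sample only

print(round(sd_sampling, 3), round(se_first, 3))
```

Both quantities estimate the same target, sigma / sqrt(n) ≈ 0.141 here, which is the point the command's comparison is designed to make visible.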


Author(s):  
Jean Walrand

Abstract Chapter 10.1007/978-3-030-49995-2_3 used the Central Limit Theorem to determine the number of users that can safely share a common cable or link. We saw that this result is also fundamental for calculating confidence intervals. In this chapter, we prove the theorem. A key tool is the characteristic function, which provides a simple way to study sums of independent random variables. Section 4.1 introduces the characteristic function and calculates it for a Gaussian random variable. Section 4.2 uses that function to prove the Central Limit Theorem. Section 4.3 uses the characteristic function to calculate the moments of a Gaussian random variable. The sum of squares of Gaussian random variables is a common model of noise in communication links. Section 4.4 proves a remarkable property of such a sum. Section 4.5 shows how to use characteristic functions to approximate binomial and geometric random variables. The error function arises in the calculation of the probability of errors in transmission systems and also in decisions based on random observations. Section 4.6 derives useful approximations of that function. Section 4.7 concludes the chapter with a discussion of an adaptive multiple-access protocol similar to one used in WiFi networks.
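The computation at the heart of this approach can be summarized as follows; this is the standard textbook sketch, not the chapter's full argument:

```latex
% Characteristic function of a standard Gaussian X ~ N(0,1):
\[
  \varphi_X(t) = \mathbb{E}\!\left[e^{itX}\right] = e^{-t^2/2}.
\]
% For i.i.d. X_1, X_2, \dots with mean 0 and variance 1, set
% S_n = (X_1 + \cdots + X_n)/\sqrt{n}. Independence factorizes
% the characteristic function of the sum:
\[
  \varphi_{S_n}(t)
  = \left[\varphi_{X_1}\!\left(\frac{t}{\sqrt{n}}\right)\right]^{n}
  = \left(1 - \frac{t^2}{2n} + o\!\left(\frac{1}{n}\right)\right)^{n}
  \xrightarrow[\;n\to\infty\;]{} e^{-t^2/2},
\]
% which is again the characteristic function of N(0,1); by Levy's
% continuity theorem, S_n converges in distribution to a standard
% Gaussian.
```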


2019 ◽  
Vol 15 (2) ◽  
pp. 15-28
Author(s):  
H. Gzyl

Abstract The metric properties of the set in which random variables take their values lead to relevant probabilistic concepts. For example, the mean of a random variable is a best predictor in that it minimizes the L2 distance between a point and a random variable. Similarly, the median is the same concept when the distance is measured by the L1 norm. Also, a geodesic distance can be defined on the cone of strictly positive vectors in ℝn in such a way that the minimizer of the distance between a point and a collection of points is their geometric mean. That geodesic distance induces a distance on the class of strictly positive random variables, which in turn leads to interesting notions of conditional expectation (or best predictors) and their estimators. It also leads to different versions of the Law of Large Numbers and the Central Limit Theorem. For example, lognormal variables appear as the analogue of Gaussian variables in the version of the Central Limit Theorem for the logarithmic distance.
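The logarithmic version of the Central Limit Theorem mentioned above can be illustrated numerically: the geometric mean of i.i.d. strictly positive variables is an exponentiated sample mean of logs, so the classical CLT makes it asymptotically lognormal. A minimal sketch (the uniform population is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 400, 3000

x = rng.uniform(0.5, 1.5, size=(reps, n))   # i.i.d., strictly positive
geo_mean = np.exp(np.log(x).mean(axis=1))   # geometric mean of each sample

# log(geo_mean) is an ordinary sample mean of log x, so the classical
# CLT applies to it, making geo_mean approximately lognormal
z = np.log(geo_mean)
print(round(z.mean(), 3), round(z.std(), 4))
```

A normal probability plot of `z` (equivalently, a lognormal one of `geo_mean`) makes the analogy with the Gaussian case visible.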


2018 ◽  
Author(s):  
Marshall A. Taylor

Understanding the central limit theorem is crucial for comprehending parametric inferential statistics. Despite this, undergraduate and graduate students alike often struggle with grasping how the theorem works and why researchers rely on its properties to draw inferences from a single unbiased random sample. In this paper, I outline a new Stata package, sdist, which can be used to simulate the central limit theorem by generating a matrix of randomly generated normal or non-normal variables and comparing the true sampling distribution standard deviation to the standard error from the first randomly-generated sample. The user also has the option of plotting the empirical sampling distribution of sample means, the first random variable distribution, and a stacked visualization of the two distributions.


Filomat ◽  
2017 ◽  
Vol 31 (14) ◽  
pp. 4369-4377 ◽  
Author(s):  
Stefano Belloni

In this note, we prove a conjecture of Shang about the sum of a random number Nn of m-dependent random variables. The random index Nn is assumed to converge in probability toward a positive random variable.
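A special case of this random-index setting is easy to simulate: take a 1-dependent sequence Y_i = X_i + X_{i+1} built from i.i.d. Gaussians and a Poisson random index. This is only an illustrative sketch of the kind of conclusion such theorems give (asymptotic normality of the normalized random-index sum), not the note's proof:

```python
import numpy as np

rng = np.random.default_rng(3)
reps = 2000

z = np.empty(reps)
for k in range(reps):
    n_k = rng.poisson(500)            # random index, concentrating around 500
    x = rng.standard_normal(n_k + 1)
    y = x[:-1] + x[1:]                # 1-dependent sequence: mean 0, Var(Y_i) = 2
    # long-run variance per term: Var(Y_i) + 2 Cov(Y_i, Y_{i+1}) = 2 + 2 = 4
    z[k] = y.sum() / np.sqrt(4 * n_k)

print(round(z.mean(), 3), round(z.std(), 3))
```

The normalized sums `z` should be approximately standard normal despite both the dependence in `y` and the randomness of the index.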


2021 ◽  
Vol 36 (2) ◽  
pp. 243-255
Author(s):  
Wei Liu ◽  
Yong Zhang

Abstract In this paper, we investigate the central limit theorem and the invariance principle for linear processes generated by a new notion of independently and identically distributed (IID) random variables for sub-linear expectations initiated by Peng [19]. It turns out that these theorems are natural and fairly neat extensions of the classical Kolmogorov’s central limit theorem and invariance principle to the case where probability measures are no longer additive.


2021 ◽  
Vol 499 (1) ◽  
pp. 124982
Author(s):  
Benjamin Avanzi ◽  
Guillaume Boglioni Beaulieu ◽  
Pierre Lafaye de Micheaux ◽  
Frédéric Ouimet ◽  
Bernard Wong
