Bayesian Inference of a Multivariate Regression Model

2014 ◽  
Vol 2014 ◽  
pp. 1-13 ◽  
Author(s):  
Marick S. Sinay ◽  
John S. J. Hsu

We explore Bayesian inference of a multivariate linear regression model using a flexible prior for the covariance structure. The commonly adopted Bayesian setup involves the conjugate prior: a multivariate normal distribution for the regression coefficients and an inverse Wishart specification for the covariance matrix. Here we depart from this approach and propose a novel Bayesian estimator for the covariance. A multivariate normal prior for the unique elements of the matrix logarithm of the covariance matrix is considered. Such a structure allows for a richer class of prior distributions for the covariance, with respect to the strength of beliefs in the prior location hyperparameters, as well as the added ability to model potential correlation amongst the elements of the covariance structure. The posterior moments of all relevant parameters of interest are calculated numerically via a Markov chain Monte Carlo procedure. A Metropolis-Hastings-within-Gibbs algorithm is invoked, with a proposal density constructed to closely match the shape of the target posterior distribution. As an application of the proposed technique, we investigate a multiple regression based upon the 1980 High School and Beyond Survey.
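The matrix-log parameterization underlying this prior can be sketched in a few lines of Python (function names here are illustrative, not taken from the paper): a symmetric matrix $A = \log\Sigma$ has $p(p+1)/2$ unique elements, and it is this unconstrained vector that receives the multivariate normal prior.

```python
import numpy as np

def log_cov(sigma):
    """Matrix logarithm of an SPD covariance via eigendecomposition."""
    w, v = np.linalg.eigh(sigma)
    return v @ np.diag(np.log(w)) @ v.T

def exp_cov(a):
    """Inverse map: matrix exponential of a symmetric matrix."""
    w, v = np.linalg.eigh(a)
    return v @ np.diag(np.exp(w)) @ v.T

def unique_elements(a):
    """Stack the p(p+1)/2 unique (lower-triangular) elements."""
    return a[np.tril_indices(a.shape[0])]

sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])        # a toy 2x2 covariance
A = log_cov(sigma)                    # symmetric, unconstrained
theta = unique_elements(A)            # 3 free parameters for p = 2
sigma_back = exp_cov(A)               # round trip recovers sigma
```

Because the exponential map always returns a positive definite matrix, any real values of `theta` correspond to a valid covariance, which is what makes an unrestricted multivariate normal prior on these elements possible.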

2018 ◽  
Vol 33 ◽  
pp. 24-40 ◽  
Author(s):  
Jolanta Pielaszkiewicz ◽  
Dietrich Von Rosen ◽  
Martin Singull

The joint distribution of the standardized traces of $\frac{1}{n}XX'$ and of $\Big(\frac{1}{n}XX'\Big)^2$, where the matrix $X:p\times n$ follows a matrix normal distribution, is proved to be asymptotically multivariate normal under the condition $\frac{{n}}{p}\overset{n,p\rightarrow\infty}{\rightarrow}c>0$. The proof relies on calculations of asymptotic moments and cumulants obtained using a recursive formula derived in Pielaszkiewicz et al. (2015). The covariance matrix of the underlying vector is explicitly given as a function of $n$ and $p$.
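The asymptotic regime can be illustrated with a small Monte Carlo sketch (dimensions and replicate counts here are arbitrary choices, not from the paper): draw matrix-normal $X$ with $n/p = c = 2$, record both traces, and standardize the resulting pair.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, reps = 50, 100, 2000          # n/p -> c = 2 in the asymptotic regime

t1, t2 = [], []
for _ in range(reps):
    X = rng.standard_normal((p, n))  # matrix normal with identity covariances
    W = X @ X.T / n                  # (1/n) X X'
    t1.append(np.trace(W))
    t2.append(np.trace(W @ W))       # trace of ((1/n) X X')^2

# Standardize each trace; per the theorem the pair is approximately
# jointly Gaussian for large n and p
z1 = (np.array(t1) - np.mean(t1)) / np.std(t1)
z2 = (np.array(t2) - np.mean(t2)) / np.std(t2)
```

The two standardized traces are strongly positively correlated, which is why the result concerns their *joint* limiting distribution rather than two marginal ones.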


Symmetry ◽  
2022 ◽  
Vol 14 (1) ◽  
pp. 156
Author(s):  
Andriette Bekker ◽  
Johannes T. Ferreira ◽  
Schalk W. Human ◽  
Karien Adamski

This research is inspired by monitoring the process covariance structure of q attributes, where samples are independent, having been collected from a multivariate normal distribution with known mean vector and unknown covariance matrix. The focus is on two matrix random variables, constructed from different Wishart ratios, that describe the process for the two consecutive time periods before and immediately after the change in the covariance structure took place. The product moments of these constructed random variables are highlighted and set the scene for a proposed measure that enables the practitioner to calculate the run-length probability of detecting a shift immediately after a change in the covariance matrix occurs. Our results open a new approach and provide insight for detecting the change in the parameter structure as soon as possible once the underlying process, described by a multivariate normal process, encounters a permanent/sustained upward or downward shift.
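The monitoring idea can be illustrated with a deliberately simplified stand-in for the paper's measure: use the generalized variance $|S|$ of a Wishart-type sample covariance as the charting statistic, set a control limit from the in-control distribution, and estimate by simulation the probability of signalling one period after a sustained upward shift. Everything here (the statistic, the shift size, the control-limit quantile) is an illustrative assumption, not the authors' construction.

```python
import numpy as np

rng = np.random.default_rng(1)
q, n, reps = 3, 25, 4000
sigma0 = np.eye(q)                    # in-control covariance (known mean = 0)
sigma1 = 1.5 * np.eye(q)              # sustained upward shift

def stat(sigma):
    """Generalized variance |S| of a sample covariance (known zero mean)."""
    X = rng.multivariate_normal(np.zeros(q), sigma, size=n)
    S = X.T @ X / n
    return np.linalg.det(S)

in_control = np.array([stat(sigma0) for _ in range(reps)])
ucl = np.quantile(in_control, 0.995)           # upper control limit
shifted = np.array([stat(sigma1) for _ in range(reps)])
p_signal = np.mean(shifted > ucl)              # one-period detection probability
```

A run-length probability for detection within $k$ periods then follows as $1 - (1 - p)^k$ when periods are independent, which is the kind of quantity the paper's measure makes computable analytically.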


Author(s):  
Alice Cortinovis ◽  
Daniel Kressner

Abstract Randomized trace estimation is a popular and well-studied technique that approximates the trace of a large-scale matrix B by computing the average of $$x^T Bx$$ for many samples of a random vector x. Often, B is symmetric positive definite (SPD), but a number of applications give rise to indefinite B. Most notably, this is the case for log-determinant estimation, a task that features prominently in statistical learning, for instance in maximum likelihood estimation for Gaussian process regression. The analysis of randomized trace estimates, including tail bounds, has mostly focused on the SPD case. In this work, we derive new tail bounds for randomized trace estimates applied to indefinite B with Rademacher or Gaussian random vectors. These bounds significantly improve existing results for indefinite B, reducing the number of required samples by a factor n or even more, where n is the size of B. Even for an SPD matrix, our work improves an existing result by Roosta-Khorasani and Ascher (Found Comput Math, 15(5):1187–1212, 2015) for Rademacher vectors. This work also analyzes the combination of randomized trace estimates with the Lanczos method for approximating the trace of f(B). Particular attention is paid to the matrix logarithm, which is needed for log-determinant estimation. We improve and extend an existing result to cover not only Rademacher but also Gaussian random vectors.
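The estimator analyzed here is the classic Hutchinson scheme: average $x^T B x$ over independent Rademacher vectors. A minimal sketch (the test matrix is an arbitrary indefinite example, chosen only to show that the method does not require positive definiteness):

```python
import numpy as np

def hutchinson_trace(B, num_samples, rng):
    """Randomized trace estimate: average of x^T B x over Rademacher x."""
    n = B.shape[0]
    total = 0.0
    for _ in range(num_samples):
        x = rng.choice([-1.0, 1.0], size=n)   # Rademacher vector
        total += x @ B @ x
    return total / num_samples

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n))
B = (A + A.T) / 2                    # symmetric but indefinite

est = hutchinson_trace(B, 1000, rng)
exact = np.trace(B)
```

A useful sanity check on the Rademacher choice: since $x_i^2 = 1$, the estimator is *exact* for any diagonal matrix with a single sample, and more generally its variance depends only on the off-diagonal entries of B, which is the quantity the tail bounds in this work control.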


2009 ◽  
Vol 28 (25) ◽  
pp. 3139-3157 ◽  
Author(s):  
Jaeil Ahn ◽  
Bhramar Mukherjee ◽  
Mousumi Banerjee ◽  
Kathleen A. Cooney

2018 ◽  
Vol 146 (12) ◽  
pp. 3949-3976 ◽  
Author(s):  
Herschel L. Mitchell ◽  
P. L. Houtekamer ◽  
Sylvain Heilliette

Abstract A column EnKF, based on the Canadian global EnKF and using the RTTOV radiative transfer (RT) model, is employed to investigate issues relating to the EnKF assimilation of Advanced Microwave Sounding Unit-A (AMSU-A) radiance measurements. Experiments are performed with large and small ensembles, with and without localization. Three different descriptions of background temperature error are considered: 1) using analytical vertical modes and hypothetical spectra, 2) using the vertical modes and spectrum of a covariance matrix obtained from the global EnKF after 2 weeks of cycling, and 3) using the vertical modes and spectrum of the static background error covariance matrix employed to initiate a global data assimilation cycle. It is found that the EnKF performs well in some of the experiments with background error description 1, and yields modest error reductions with background error description 3. However, the EnKF is virtually unable to reduce the background error (even when using a large ensemble) with background error description 2. To analyze these results, the different background error descriptions are viewed through the prism of the RT model by comparing the trace of the matrix $\mathbf{H}\mathbf{B}\mathbf{H}^{\mathrm{T}}$, where $\mathbf{H}$ is the RT model and $\mathbf{B}$ is the background error covariance matrix. Indeed, this comparison is found to explain the difference in the results obtained, which relates to the degree to which deep modes are, or are not, present in the different background error covariances. The results suggest that, after 2 weeks of cycling, the global EnKF has virtually eliminated all background error structures that can be “seen” by the AMSU-A radiances.
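The diagnostic trace($\mathbf{HBH}^{\mathrm{T}}$) is easy to compute once $\mathbf{H}$ and $\mathbf{B}$ are in hand. The sketch below uses toy stand-ins (Gaussian weighting functions for the RTTOV Jacobians, exponential vertical correlations for $\mathbf{B}$; all dimensions and length scales are invented for illustration) to show the qualitative effect the abstract describes: background covariances with deep vertical structures project far more variance onto broad radiance weighting functions than covariances with only shallow structures.

```python
import numpy as np

nlev, nchan = 60, 8                  # model levels, AMSU-A-like channel count

# Toy stand-in for the linearized RT operator H: each channel is a smooth
# Gaussian weighting function over model levels, normalized to sum to 1
levels = np.arange(nlev)
centers = np.linspace(5, 55, nchan)
H = np.exp(-0.5 * ((levels[None, :] - centers[:, None]) / 6.0) ** 2)
H /= H.sum(axis=1, keepdims=True)

def seen_variance(B):
    """trace(H B H^T): background variance visible to the radiances."""
    return np.trace(H @ B @ H.T)

# Background error covariances with deep vs shallow vertical correlations
B_deep = np.fromfunction(lambda i, j: np.exp(-np.abs(i - j) / 10.0), (nlev, nlev))
B_shallow = np.fromfunction(lambda i, j: np.exp(-np.abs(i - j) / 1.0), (nlev, nlev))

ratio = seen_variance(B_deep) / seen_variance(B_shallow)   # > 1: deep modes dominate
```

A covariance whose deep modes have been eliminated, as the cycled-EnKF matrix in description 2 apparently was, yields a small trace and hence little for the radiances to correct.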


2013 ◽  
Vol 63 (2) ◽  
Author(s):  
Nur Syahidah Yusoff ◽  
Maman Abdurachman Djauhari

The stability of the covariance matrix is a major issue in multivariate analysis. As can be seen in the literature, the most popular and widely used tests are the Box M-test and the Jennrich J-test, introduced by Box in 1949 and Jennrich in 1970, respectively. These tests use the determinant of the sample covariance matrix as a multivariate dispersion measure. Since the determinant is only a scalar summary of a complex structure, it cannot represent the whole structure. Moreover, these tests are quite cumbersome to compute for high-dimensional data sets, since they involve not only the computation of the determinant of the covariance matrix but also the inversion of a matrix. This motivates us to propose a new statistical test which is computationally more efficient and which, if used simultaneously with the M-test or J-test, gives a better understanding of the stability of the covariance structure. An example is presented to illustrate its advantages.
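For reference, the baseline Box M-test the abstract compares against can be sketched as follows: the statistic contrasts the log-determinant of the pooled covariance with the group-wise log-determinants, and a scale factor gives the standard chi-square approximation. This is a minimal textbook implementation, not the authors' proposed test.

```python
import numpy as np
from math import log

def box_m(samples):
    """Box's M statistic for equality of covariance matrices across groups.

    samples: list of (n_i x p) data arrays, one per group.
    Returns the chi-square-approximated statistic and its degrees of freedom.
    """
    k = len(samples)
    p = samples[0].shape[1]
    ns = np.array([s.shape[0] for s in samples])
    covs = [np.cov(s, rowvar=False) for s in samples]         # unbiased S_i
    pooled = sum((n - 1) * S for n, S in zip(ns, covs)) / (ns.sum() - k)

    M = (ns.sum() - k) * log(np.linalg.det(pooled)) \
        - sum((n - 1) * log(np.linalg.det(S)) for n, S in zip(ns, covs))
    # Box's scale factor for the chi-square approximation
    c = (sum(1.0 / (n - 1) for n in ns) - 1.0 / (ns.sum() - k)) \
        * (2 * p * p + 3 * p - 1) / (6.0 * (p + 1) * (k - 1))
    df = p * (p + 1) * (k - 1) // 2
    return (1 - c) * M, df

rng = np.random.default_rng(0)
g1 = rng.multivariate_normal(np.zeros(3), np.eye(3), size=100)
g2 = rng.multivariate_normal(np.zeros(3), np.eye(3), size=100)
stat, df = box_m([g1, g2])            # equal covariances: stat ~ chi-square(df)
```

The determinants and the implicit matrix factorizations in this computation are exactly the costs the abstract's proposed test is designed to avoid in high dimensions.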


Author(s):  
Marta Savkina

This paper studies, in the case of heteroscedastic independent deviations, a regression model whose function has the form $f(x) = ax^2+bx+c$, where $a$, $b$, and $c$ are unknown parameters. Approximate values (observations) of the function $f(x)$ are registered at equidistant points of a line segment. The theorem proved in the paper gives a sufficient condition on the variances of the deviations under which the Aitken estimate of the parameter $a$ coincides with its least-squares (LS) estimate, in the case of an odd number of observation points and a bisymmetric covariance matrix. Under this condition, the Aitken and LS estimates of $b$ and $c$ do not coincide. The proof consists of the following steps. First, the original system of polynomial equations is simplified to a system of second-degree polynomials. The variables of both systems are the unknown variances of the deviations; each solution of the original system gives a set of variances at which the Aitken and LS estimates of the parameter $a$ coincide. Next, solving the original polynomial system is reduced to solving a single equation in three unknowns, with all the other unknowns expressed through these three. Finally, it is proved that there exist positive, pairwise unequal values of these three unknowns that solve the resulting equation, and that substituting these values into the expressions for the remaining unknowns yields positive values as well.
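The two estimators being compared can be set up in a few lines. The sketch below uses an odd number of equidistant points and mirror-symmetric variances (independent deviations, so the bisymmetric covariance is diagonal); the particular variance values and true parameters are arbitrary illustrations and are not claimed to satisfy the theorem's sufficient condition.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 7                                          # odd number of observation points
x = np.linspace(-1, 1, m)                      # equidistant points on a segment
X = np.column_stack([x**2, x, np.ones(m)])     # design columns for a, b, c

# Mirror-symmetric heteroscedastic variances: a diagonal bisymmetric covariance
variances = np.array([4.0, 2.0, 1.5, 1.0, 1.5, 2.0, 4.0])
y = X @ np.array([2.0, -1.0, 0.5]) + rng.standard_normal(m) * np.sqrt(variances)

# Ordinary least squares (LS) estimate
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Aitken (generalized least squares) estimate: (X' V^-1 X)^-1 X' V^-1 y
Vinv = np.diag(1.0 / variances)
beta_gls = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
```

The theorem characterizes special variance patterns for which the first components (the estimates of $a$) of these two vectors agree exactly; for generic variances, as here, all three components differ.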

