Type I Error Rates for Yao's and James' Tests of Equality of Mean Vectors under Variance-Covariance Heteroscedasticity

1991 ◽  
Vol 16 (2) ◽  
pp. 125-139 ◽  
Author(s):  
James Algina ◽  
Takako C. Oshima ◽  
K. Linda Tang

Type I error rates for Yao’s, James’ first-order, James’ second-order, and Johansen’s tests of equality of mean vectors for two independent samples were estimated under various conditions defined by the degree of heteroscedasticity and nonnormality (uniform, Laplace, t(5), beta(5, 1.5), exponential, and lognormal distributions). Unlike Hotelling’s T2, these alternative procedures do not assume variance-covariance homogeneity. Although the four procedures can be seriously nonrobust with exponential and lognormal distributions, they were fairly robust with the remaining distributions. Yao’s test, James’ second-order test, and Johansen’s test performed slightly better than James’ first-order test.
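The Monte Carlo design behind such robustness studies can be sketched simply: generate many two-sample data sets under a true null hypothesis, apply the test, and record the rejection rate. The Python sketch below is a hypothetical illustration (function names and settings are my own, not the authors' code); it uses the classical pooled-covariance Hotelling T2 test on normal data, and the inflation it shows under heteroscedasticity with unequal sample sizes is exactly what motivates the Yao/James/Johansen alternatives studied here.

```python
import numpy as np
from scipy import stats


def hotelling_t2_pvalue(x, y):
    """Two-sample Hotelling T^2 p-value with a pooled covariance matrix.

    This classical test assumes variance-covariance homogeneity, unlike
    the Yao/James/Johansen alternatives discussed in the abstract.
    """
    n1, p = x.shape
    n2 = y.shape[0]
    d = x.mean(axis=0) - y.mean(axis=0)
    s_pooled = ((n1 - 1) * np.cov(x, rowvar=False)
                + (n2 - 1) * np.cov(y, rowvar=False)) / (n1 + n2 - 2)
    t2 = (n1 * n2 / (n1 + n2)) * d @ np.linalg.solve(s_pooled, d)
    # Exact F transformation of T^2 under normality and homogeneity
    f_stat = t2 * (n1 + n2 - p - 1) / (p * (n1 + n2 - 2))
    return stats.f.sf(f_stat, p, n1 + n2 - p - 1)


def type1_rate(n1, n2, cov1, cov2, reps=2000, alpha=0.05, seed=0):
    """Estimate the Type I error rate by Monte Carlo: both groups share
    the same (zero) mean vector, so every rejection is a Type I error."""
    rng = np.random.default_rng(seed)
    p = cov1.shape[0]
    mu = np.zeros(p)
    rejections = 0
    for _ in range(reps):
        x = rng.multivariate_normal(mu, cov1, size=n1)
        y = rng.multivariate_normal(mu, cov2, size=n2)
        rejections += hotelling_t2_pvalue(x, y) < alpha
    return rejections / reps
```

With equal covariance matrices the estimated rate sits near the nominal 0.05; giving the smaller group a much larger covariance matrix (e.g. `type1_rate(40, 10, np.eye(3), 9 * np.eye(3))`) inflates it well above nominal, reproducing the kind of nonrobustness these papers quantify.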


1988 ◽  
Vol 13 (3) ◽  
pp. 281-290 ◽  
Author(s):  
James Algina ◽  
Kezhen L. Tang

For Yao’s and James’ tests, Type I error rates were estimated for various combinations of the number of variables (p), the sample-size ratio (n1:n2), the sample-size-to-variables ratio, and the degree of heteroscedasticity. These tests are alternatives to Hotelling’s T2 and are intended for use when the variance-covariance matrices are unequal in a study using two independent samples. The performance of Yao’s test was superior to that of James’. Yao’s test had appropriate Type I error rates when p ≥ 10, (n1 + n2)/p ≥ 10, and 1:2 ≤ n1:n2 ≤ 2:1. When (n1 + n2)/p = 20, Yao’s test was robust when n1:n2 was 5:1, 3:1, and 4:1 and p was 2, 6, and 10, respectively.
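Yao's procedure generalizes the Welch approximate-degrees-of-freedom idea to mean vectors: each group contributes its own covariance matrix divided by its sample size, and T2 is referred to an F distribution with estimated degrees of freedom. The sketch below states the statistic as it is usually presented in the literature (a minimal illustration under normality, not Algina and Tang's code; any error in transcription is mine):

```python
import numpy as np
from scipy import stats


def yao_test(x, y):
    """Yao's approximate-degrees-of-freedom two-sample test.

    Sketch of the statistic as commonly presented: no homogeneity
    assumption, each group's covariance matrix enters separately.
    """
    n1, p = x.shape
    n2 = y.shape[0]
    w1 = np.cov(x, rowvar=False) / n1          # S1 / n1
    w2 = np.cov(y, rowvar=False) / n2          # S2 / n2
    w_inv = np.linalg.inv(w1 + w2)
    d = x.mean(axis=0) - y.mean(axis=0)
    t2 = d @ w_inv @ d
    # Welch-Satterthwaite-style estimated degrees of freedom
    b1 = (d @ w_inv @ w1 @ w_inv @ d) / t2
    b2 = (d @ w_inv @ w2 @ w_inv @ d) / t2
    nu = 1.0 / (b1 ** 2 / (n1 - 1) + b2 ** 2 / (n2 - 1))
    # Refer T^2 to an F distribution on (p, nu - p + 1) df
    f_stat = t2 * (nu - p + 1) / (nu * p)
    return stats.f.sf(f_stat, p, nu - p + 1)
```

Because the sample-size-weighted covariance matrices are never pooled, the statistic remains calibrated when the two groups' covariance matrices differ, which is the setting the abstract evaluates.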


2019 ◽  
Vol 14 (2) ◽  
pp. 399-425 ◽  
Author(s):  
Haolun Shi ◽  
Guosheng Yin

2014 ◽  
Vol 38 (2) ◽  
pp. 109-112 ◽  
Author(s):  
Daniel Furtado Ferreira

Sisvar is a statistical analysis system widely used by the scientific community to produce statistical analyses and to support scientific results and conclusions. Its wide adoption is due to its accuracy, precision, simplicity, and robustness. Among its many analysis options, one that is not so widely used is multiple comparison via bootstrap approaches. This paper reviews the subject and shows some advantages of using Sisvar to perform such analyses when comparing treatment means. Bootstrap alternatives to tests such as Dunnett’s, Tukey’s, Student-Newman-Keuls, and Scott-Knott show greater power and better control of experimentwise Type I error rates under non-normal, asymmetric, platykurtic, or leptokurtic distributions.
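The general idea behind bootstrap multiple comparisons can be illustrated with a single-step "max-t" procedure: center each group to impose the null hypothesis, resample, and compare every observed pairwise statistic against the bootstrap distribution of the maximum statistic, which controls the experimentwise error rate without relying on normal-theory critical values. The sketch below is a generic illustration of that idea in Python (hypothetical function names, and not the algorithm implemented inside Sisvar):

```python
import numpy as np
from itertools import combinations


def bootstrap_pairwise(groups, n_boot=2000, seed=0):
    """Bootstrap max-t pairwise comparisons of treatment means.

    Generic single-step procedure: adjusted p-values come from the
    bootstrap distribution of the maximum Welch |t| under H0.
    """
    rng = np.random.default_rng(seed)
    pairs = list(combinations(range(len(groups)), 2))

    def tstat(a, b):
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        return abs(a.mean() - b.mean()) / se

    observed = [tstat(groups[i], groups[j]) for i, j in pairs]
    # Center each group so all bootstrap samples satisfy H0: equal means
    centered = [g - g.mean() for g in groups]
    max_t = np.empty(n_boot)
    for b in range(n_boot):
        res = [rng.choice(g, size=len(g), replace=True) for g in centered]
        max_t[b] = max(tstat(res[i], res[j]) for i, j in pairs)
    # Experimentwise-adjusted p-value for each pair
    pvals = [(max_t >= t).mean() for t in observed]
    return dict(zip(pairs, pvals))
```

Because resampling is done from the observed (centered) data rather than a normal model, the reference distribution adapts to asymmetry and non-normal kurtosis, which is the advantage the abstract attributes to the bootstrap versions of these tests.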


2021 ◽  
Author(s):  
Megha Joshi ◽  
James E Pustejovsky ◽  
S. Natasha Beretvas

The most common and well-known meta-regression models work under the assumption that there is only one effect size estimate per study and that the estimates are independent. However, meta-analytic reviews of social science research often include multiple effect size estimates per primary study, leading to dependence in the estimates. Some meta-analyses also include multiple studies conducted by the same lab or investigator, creating another potential source of dependence. An increasingly popular method for handling dependence is robust variance estimation (RVE), but this method can produce inflated Type I error rates when the number of studies is small. Small-sample correction methods for RVE have been shown to control Type I error rates adequately but may be overly conservative, especially for tests of multiple-contrast hypotheses. We evaluated an alternative method for handling dependence, cluster wild bootstrapping, which has been examined in the econometrics literature but not in the context of meta-analysis. Results from two simulation studies indicate that cluster wild bootstrapping maintains adequate Type I error rates and provides more power than extant small-sample correction methods, particularly for multiple-contrast hypothesis tests. We recommend using cluster wild bootstrapping to conduct hypothesis tests for meta-analyses with a small number of studies. We have also created an R package that implements such tests.
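The mechanics of a cluster wild bootstrap test can be sketched in a few lines: fit the model with the null hypothesis imposed, then repeatedly flip the sign of each cluster's entire residual vector with Rademacher weights and re-estimate, so that the dependence structure within each study (cluster) is preserved in every bootstrap sample. The Python sketch below is a minimal, unstudentized illustration of that resampling scheme for one OLS coefficient (my own simplified construction, not the authors' R package; a serious implementation would studentize the statistic with a cluster-robust standard error):

```python
import numpy as np


def cluster_wild_boot_p(y, X, cluster, j=1, n_boot=999, seed=0):
    """Cluster wild bootstrap p-value for H0: beta_j = 0 in OLS.

    Minimal sketch: Rademacher weights applied per cluster to the
    residuals of the null-restricted fit, preserving within-cluster
    dependence in each bootstrap sample.
    """
    rng = np.random.default_rng(seed)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]

    # Restricted fit with column j dropped, which imposes H0
    Xr = np.delete(X, j, axis=1)
    br = np.linalg.lstsq(Xr, y, rcond=None)[0]
    fitted_r = Xr @ br
    resid_r = y - fitted_r

    ids = np.unique(cluster)
    stats_boot = np.empty(n_boot)
    for b in range(n_boot):
        # One Rademacher weight (+1 or -1) per cluster, not per observation
        w = rng.choice([-1.0, 1.0], size=len(ids))
        wmap = dict(zip(ids, w))
        y_star = fitted_r + resid_r * np.array([wmap[c] for c in cluster])
        coef_star = np.linalg.lstsq(X, y_star, rcond=None)[0][j]
        stats_boot[b] = abs(coef_star)
    # Bootstrap p-value with the usual +1 finite-sample adjustment
    return (1 + np.sum(stats_boot >= abs(beta[j]))) / (n_boot + 1)
```

Because whole clusters are perturbed together, the bootstrap distribution reflects the small number of independent units (studies) rather than the larger number of effect size estimates, which is why the method remains calibrated when few studies are available.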

