Note: Graphical Monte Carlo Type I Error Rates for Multiple Comparison Procedures

Biometrics ◽  
1971 ◽  
Vol 27 (3) ◽  
pp. 738 ◽  
Author(s):  
Thomas J. Boardman ◽  
Donald R. Moffitt


2014 ◽  
Vol 38 (2) ◽  
pp. 109-112 ◽  
Author(s):  
Daniel Furtado Ferreira

Sisvar is a statistical analysis system widely used by the scientific community to produce statistical analyses and to generate scientific results and conclusions. Its procedures are heavily used because they are accurate, precise, simple, and robust. Among its many analysis options, one that is not so widely used is multiple comparison of treatment means via bootstrap approaches. This paper reviews the subject and shows some advantages of using Sisvar to perform such analyses. Bootstrap alternatives to tests such as Dunnett, Tukey, Student-Newman-Keuls, and Scott-Knott show greater power and better control of experimentwise Type I error rates under non-normal, asymmetric, platykurtic, or leptokurtic distributions.
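The core bootstrap idea behind such mean-comparison procedures can be sketched as follows. This is a minimal illustration of a pooled-resampling bootstrap test for a difference between two treatment means, not Sisvar's actual algorithm (the bootstrap Tukey/Scott-Knott variants the abstract describes are more involved); all function names and the toy data are invented for illustration.

```python
import random
import statistics

def bootstrap_mean_diff_p(x, y, n_boot=2000, seed=1):
    """Two-sided bootstrap p-value for a difference in treatment means.

    Resamples with replacement from the pooled sample to approximate
    the null distribution of the mean difference.
    """
    rng = random.Random(seed)
    observed = abs(statistics.mean(x) - statistics.mean(y))
    pooled = x + y  # pool under the null hypothesis of equal means
    extreme = 0
    for _ in range(n_boot):
        bx = [rng.choice(pooled) for _ in range(len(x))]
        by = [rng.choice(pooled) for _ in range(len(y))]
        if abs(statistics.mean(bx) - statistics.mean(by)) >= observed:
            extreme += 1
    return extreme / n_boot

# Toy yields for two treatments (illustrative values only)
x = [12.1, 11.8, 12.6, 12.3, 11.9]
y = [12.0, 12.2, 11.7, 12.4, 12.1]
p = bootstrap_mean_diff_p(x, y)
```

Because the null distribution is built from the data rather than a normal-theory reference, the procedure needs no normality assumption, which is why the abstract reports better error control under skewed or heavy-/light-tailed distributions.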


1998 ◽  
Vol 55 (9) ◽  
pp. 2127-2140 ◽  
Author(s):  
Brian J Pyper ◽  
Randall M Peterman

Autocorrelation in fish recruitment and environmental data can complicate statistical inference in correlation analyses. To address this problem, researchers often either adjust hypothesis testing procedures (e.g., adjust degrees of freedom) to account for autocorrelation or remove the autocorrelation using prewhitening or first-differencing before analysis. However, the effectiveness of methods that adjust hypothesis testing procedures has not yet been fully explored quantitatively. We therefore compared several adjustment methods via Monte Carlo simulation and found that a modified version of these methods kept Type I error rates near the nominal level. In contrast, methods that remove autocorrelation control Type I error rates well but may in some circumstances increase Type II error rates (probability of failing to detect some environmental effect) and hence reduce statistical power, in comparison with adjusting the test procedure. Specifically, our Monte Carlo simulations show that prewhitening and especially first-differencing decrease power in the common situations where low-frequency (slowly changing) processes are important sources of covariation in fish recruitment or in environmental variables. Conversely, removing autocorrelation can increase power when low-frequency processes account for only some of the covariation. We therefore recommend that researchers carefully consider the importance of different time scales of variability when analyzing autocorrelated data.
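The first-differencing approach discussed above can be sketched in a few lines: generate two independent AR(1) series (so any apparent correlation is spurious), then correlate both the raw and the first-differenced series. This is an illustrative simulation under assumed parameters (AR coefficient 0.8, n = 50), not the authors' simulation design.

```python
import random

def ar1(n, phi, rng):
    """Generate an AR(1) series x_t = phi * x_{t-1} + e_t, e_t ~ N(0, 1)."""
    series, prev = [], 0.0
    for _ in range(n):
        prev = phi * prev + rng.gauss(0, 1)
        series.append(prev)
    return series

def first_difference(x):
    """Remove autocorrelation by differencing adjacent observations."""
    return [b - a for a, b in zip(x, x[1:])]

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

rng = random.Random(42)
x = ar1(50, 0.8, rng)
y = ar1(50, 0.8, rng)  # independent of x: true correlation is zero
r_raw = pearson_r(x, y)                                   # can be spuriously large
r_diff = pearson_r(first_difference(x), first_difference(y))
```

Standard tests applied to `r_raw` would use n − 2 degrees of freedom even though strong autocorrelation shrinks the effective sample size, inflating Type I error; differencing removes the autocorrelation but, as the abstract notes, also removes low-frequency covariation and hence power.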


1992 ◽  
Vol 17 (4) ◽  
pp. 297-313 ◽  
Author(s):  
Michael R. Harwell

Monte Carlo studies provide information that can assist researchers in selecting a statistical test when underlying assumptions of the test are violated. Effective use of this literature is hampered by the lack of an overarching theory to guide the interpretation of Monte Carlo studies. The problem is exacerbated by the impressionistic nature of the studies, which can lead different readers to different conclusions. These shortcomings can be addressed using meta-analytic methods to integrate the results of Monte Carlo studies. Quantitative summaries of the effects of assumption violations on the Type I error rate and power of a test can assist researchers in selecting the best test for their data. Such summaries can also be used to evaluate the validity of previously published statistical results. This article provides a methodological framework for quantitatively integrating Type I error rates and power values for Monte Carlo studies. An example is provided using Monte Carlo studies of Bartlett’s (1937) test of equality of variances. The importance of relating meta-analytic results to exact statistical theory is emphasized.
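A single cell of the kind of Monte Carlo study being meta-analyzed can be sketched directly: estimate the empirical Type I error rate of Bartlett's (1937) test of equal variances by repeatedly simulating data under the null. The group sizes, simulation count, and seed below are arbitrary choices for illustration; the chi-square critical value 5.991 is the 0.95 quantile with k − 1 = 2 degrees of freedom.

```python
import math
import random
import statistics

def bartlett_stat(groups):
    """Bartlett's (1937) statistic for equality of k group variances."""
    k = len(groups)
    ns = [len(g) for g in groups]
    N = sum(ns)
    variances = [statistics.variance(g) for g in groups]
    sp2 = sum((n - 1) * v for n, v in zip(ns, variances)) / (N - k)
    num = (N - k) * math.log(sp2) - sum(
        (n - 1) * math.log(v) for n, v in zip(ns, variances)
    )
    c = 1 + (sum(1 / (n - 1) for n in ns) - 1 / (N - k)) / (3 * (k - 1))
    return num / c

def empirical_type1_rate(n_sims=500, crit=5.991, seed=7):
    """Fraction of null (normal, equal-variance) datasets rejected at alpha = 0.05."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_sims):
        groups = [[rng.gauss(0, 1) for _ in range(10)] for _ in range(3)]
        if bartlett_stat(groups) > crit:
            rejections += 1
    return rejections / n_sims

rate = empirical_type1_rate()  # should sit near the nominal 0.05 under normality
```

Each published Monte Carlo study contributes such empirical rates under various assumption violations (e.g., non-normal `rng` draws); the meta-analytic framework in the article treats these rates as the data to be integrated.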


Methodology ◽  
2009 ◽  
Vol 5 (2) ◽  
pp. 60-70 ◽  
Author(s):  
W. Holmes Finch ◽  
Teresa Davenport

Permutation testing has been suggested as an alternative to the standard F approximate tests used in multivariate analysis of variance (MANOVA). These approximate tests, such as Wilks’ Lambda and Pillai’s Trace, have been shown to perform poorly when the assumptions of normally distributed dependent variables and homogeneity of group covariance matrices are violated. Because Monte Carlo permutation tests do not rely on distributional assumptions, they may be expected to work better than their approximate cousins when the data do not conform to the assumptions described above. The current simulation study compared the performance of four standard MANOVA test statistics with their Monte Carlo permutation-based counterparts under a variety of conditions with small samples, including conditions when the assumptions were met and when they were not. Results suggest that for sample sizes of 50 subjects, power is very low for all the statistics. In addition, Type I error rates for both the approximate F and Monte Carlo tests were inflated under the condition of nonnormal data and unequal covariance matrices. In general, the performance of the Monte Carlo permutation tests was slightly better in terms of Type I error rates and power when both assumptions of normality and homogeneous covariance matrices were not met. It should be noted that these simulations were based upon the case with three groups only, and as such the results presented in this study can only be generalized to similar situations.
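The Monte Carlo permutation mechanism the abstract refers to — randomly reshuffling group labels rather than enumerating every permutation — can be sketched in the simplest univariate case. This stands in for the multivariate statistics (Wilks' Lambda, Pillai's Trace) actually studied; the statistic, sample data, and permutation count here are illustrative assumptions.

```python
import random
import statistics

def monte_carlo_permutation_p(g1, g2, n_perm=1000, seed=3):
    """Monte Carlo permutation p-value for a difference in group means.

    Instead of enumerating all label assignments, draw n_perm random
    reshuffles; the add-one correction keeps the p-value strictly positive.
    """
    rng = random.Random(seed)
    observed = abs(statistics.mean(g1) - statistics.mean(g2))
    pooled = g1 + g2
    n1 = len(g1)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # reassign group labels at random
        diff = abs(statistics.mean(pooled[:n1]) - statistics.mean(pooled[n1:]))
        if diff >= observed:
            extreme += 1
    return (extreme + 1) / (n_perm + 1)

g1 = [1.2, 0.8, 1.5, 1.1, 0.9, 1.3]
g2 = [0.7, 1.0, 0.6, 0.9, 1.1, 0.8]
p = monte_carlo_permutation_p(g1, g2)
```

The same reshuffling logic applies in the MANOVA setting, with a multivariate statistic recomputed per permutation. Note that exchangeability of observations under the null is still assumed, which is one reason the abstract finds inflated Type I error when covariance matrices differ across groups.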


1981 ◽  
Vol 48 (1) ◽  
pp. 19-22 ◽  
Author(s):  
James D. Church ◽  
Edward L. Wike

A Monte Carlo study was done to find the Type I error rates for three nonparametric procedures for making k − 1 many-one comparisons in a one-way design. The tests were the Silverstein and Steel many-one rank tests and the two-sample Wilcoxon rank-sum test. k = 3, 5, 7, and 10 treatments were crossed with n = 7, 10, and 15 replicates, with 1000 simulations per k, n combination. Analyses of four Type I error rates showed that: (1) the Wilcoxon test had the best comparisonwise error rates; (2) none of the tests functioned well as protected tests; and (3) the Silverstein test had the best experimentwise error rates and was the recommended procedure for many-one tests in a one-way layout.
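The distinction between comparisonwise and experimentwise error in a many-one design can be made concrete with a small simulation: run k − 1 Wilcoxon rank-sum comparisons against a control (each at nominal alpha = 0.05 via the normal approximation, |z| > 1.96) and count how often at least one falsely rejects. This is a hedged sketch under assumed settings (k = 3, n = 10, normal data), not the paper's exact design.

```python
import random

def rank_sum_z(x, y):
    """Normal-approximation z for the two-sample Wilcoxon rank-sum test,
    using midranks for ties."""
    combined = sorted((v, i) for i, v in enumerate(x + y))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1  # extend over a run of tied values
        midrank = (i + j) / 2 + 1
        for t in range(i, j + 1):
            ranks[combined[t][1]] = midrank
        i = j + 1
    n1, n2 = len(x), len(y)
    w = sum(ranks[:n1])  # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = (n1 * n2 * (n1 + n2 + 1) / 12) ** 0.5
    return (w - mu) / sigma

def experimentwise_rate(k=3, n=10, n_sims=300, seed=5):
    """P(at least one of the k - 1 many-one comparisons falsely rejects)."""
    rng = random.Random(seed)
    any_reject = 0
    for _ in range(n_sims):
        control = [rng.gauss(0, 1) for _ in range(n)]
        treatments = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(k - 1)]
        if any(abs(rank_sum_z(t, control)) > 1.96 for t in treatments):
            any_reject += 1
    return any_reject / n_sims

rate = experimentwise_rate()
```

With unadjusted per-comparison tests the experimentwise rate exceeds the nominal 0.05 and grows with k, which is exactly the inflation that procedures like Silverstein's many-one rank test are designed to control.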

