Multiple Comparison Procedures for the Differences of Proportion Parameters in Over-Reported Multiple-Sample Binomial Data

Stats ◽  
2020 ◽  
Vol 3 (1) ◽  
pp. 56-67
Author(s):  
Dewi Rahardja

In sequential tests, a (pairwise) multiple comparison procedure (MCP) is typically performed after an omnibus test (an overall equality test). In general, when an omnibus test (e.g., a test of overall equality of multiple proportions) is rejected, we then conduct (pairwise) multiple comparisons (MCPs) to determine which pairs (e.g., of proportions) the significant differences came from. In this article, via likelihood-based approaches, we derive three confidence intervals (CIs) for comparing each pairwise proportion difference in the presence of over-reported binomial data. Our closed-form algorithm is easy to implement. As a result, for multiple-sample proportion differences, we can easily apply MCP adjustment methods (e.g., Bonferroni, Šidák, and Dunn) to address the multiplicity issue, unlike previous literature. We illustrate our procedures with a real data example.
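The paper's likelihood-based CIs for over-reported data are not reproduced here, but the MCP adjustments it mentions (Bonferroni and Šidák) can be illustrated with a minimal sketch: ordinary Wald intervals for all pairwise proportion differences, with the per-comparison level adjusted for the number of pairs. The function name and the Wald form are illustrative assumptions, not the authors' procedure.

```python
import numpy as np
from scipy.stats import norm

def pairwise_prop_diff_cis(successes, trials, alpha=0.05, method="bonferroni"):
    """Wald CIs for all pairwise proportion differences, with the
    per-comparison alpha adjusted by Bonferroni or Sidak for the
    m = k(k-1)/2 comparisons among k samples."""
    successes = np.asarray(successes, dtype=float)
    trials = np.asarray(trials, dtype=float)
    k = len(successes)
    m = k * (k - 1) // 2                      # number of pairwise comparisons
    if method == "bonferroni":
        a = alpha / m
    elif method == "sidak":
        a = 1.0 - (1.0 - alpha) ** (1.0 / m)
    else:
        raise ValueError("unknown method")
    z = norm.ppf(1 - a / 2)
    p = successes / trials
    cis = {}
    for i in range(k):
        for j in range(i + 1, k):
            d = p[i] - p[j]
            se = np.sqrt(p[i] * (1 - p[i]) / trials[i]
                         + p[j] * (1 - p[j]) / trials[j])
            cis[(i, j)] = (d - z * se, d + z * se)
    return cis
```

Since the Šidák level 1 − (1 − α)^(1/m) is slightly larger than the Bonferroni level α/m, Šidák intervals are marginally narrower at the same familywise level.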

1986 ◽  
Vol 20 (3) ◽  
pp. 350-359 ◽  
Author(s):  
Wayne Hall ◽  
Kevin D. Bird

Methods are outlined for performing simultaneous multiple comparisons between groups when the dependent variable is one in which subjects are assigned to one of two or more categories. These methods provide tests analogous to Scheffé- and Bonferroni-adjusted tests of contrasts in the analysis of variance. Examples are provided of each of these procedures.
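The Bonferroni-adjusted flavour of such comparisons can be sketched as pairwise chi-square tests on group-by-category count tables, with the significance threshold divided by the number of pairs. This is a generic illustration under that assumption, not the authors' Scheffé-type construction.

```python
import numpy as np
from scipy.stats import chi2_contingency

def pairwise_category_tests(counts, alpha=0.05):
    """Bonferroni-adjusted pairwise chi-square tests between groups.

    counts: 2-D array, rows = groups, columns = outcome categories.
    Returns {(i, j): (raw p-value, significant at alpha/m)}."""
    counts = np.asarray(counts)
    k = counts.shape[0]
    m = k * (k - 1) // 2                      # number of pairwise tests
    results = {}
    for i in range(k):
        for j in range(i + 1, k):
            chi2, p, dof, expected = chi2_contingency(counts[[i, j]])
            results[(i, j)] = (p, p < alpha / m)
    return results
```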


2008 ◽  
Vol 6 ◽  
pp. 117693510800600 ◽  
Author(s):  
Hongmei Jiang ◽  
R.W. Doerge

Whole genome microarray investigations (e.g. differential expression, differential methylation, ChIP-chip) provide opportunities to test millions of features in a genome. Traditional multiple comparison procedures such as familywise error rate (FWER) controlling procedures are too conservative. Although false discovery rate (FDR) procedures have been suggested as having greater power, the control itself is not exact and depends on the proportion of true null hypotheses. Because this proportion is unknown, it has to be accurately (small bias, small variance) estimated, preferably using a simple calculation that can be made accessible to the general scientific community. We propose an easy-to-implement method, with R code made available, for estimating the proportion of true null hypotheses. This estimate has relatively small bias and small variance, as demonstrated by comparisons with four existing procedures on simulated and real data. Although presented here in the context of microarrays, this estimate is applicable for many multiple comparison situations.
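The authors' own estimator is not reproduced here; as a point of reference, a well-known simple estimator in this family is the Storey-type tail estimate, which assumes p-values from true nulls are uniform so that the density above a cut-off λ reflects the null proportion:

```python
import numpy as np

def estimate_pi0(pvalues, lam=0.5):
    """Storey-type estimate of the proportion of true null hypotheses:
    under the null, p-values are uniform, so the fraction of p-values
    above lam, rescaled by 1/(1 - lam), estimates pi0."""
    p = np.asarray(pvalues)
    pi0 = np.mean(p > lam) / (1.0 - lam)
    return min(pi0, 1.0)                      # pi0 is a proportion, cap at 1
```

A smaller estimate of π₀ indicates more true signals among the tested features, which in turn tightens FDR control.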


2014 ◽  
Vol 38 (2) ◽  
pp. 109-112 ◽  
Author(s):  
Daniel Furtado Ferreira

Sisvar is a statistical analysis system widely used by the scientific community to produce statistical analyses, scientific results, and conclusions. Its broad adoption is due to its accuracy, precision, simplicity, and robustness. Among its many analysis options, Sisvar offers one that is not so widely used: multiple comparison procedures based on bootstrap approaches. This paper reviews this subject and shows some advantages of using Sisvar to perform such analyses to compare treatment means. Tests like Dunnett, Tukey, Student-Newman-Keuls, and Scott-Knott are performed alternatively by bootstrap methods and show greater power and better control of experimentwise type I error rates under non-normal, asymmetric, platykurtic, or leptokurtic distributions.
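Sisvar's bootstrap procedures are not publicly specified in this abstract, but the general idea of a bootstrap analogue of a Tukey-style comparison can be sketched as follows: resample the groups after centering them (imposing the null of equal means), use the bootstrap distribution of the largest pairwise mean difference as the reference, and flag observed pairs that exceed its (1 − α) quantile. Everything here is an illustrative assumption, not Sisvar's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_maxdiff_test(groups, n_boot=2000, alpha=0.05):
    """Bootstrap analogue of a Tukey-style multiple comparison.

    Centers each group (null of equal means), bootstraps the largest
    pairwise difference of group means, and declares an observed pair
    significant if its mean difference exceeds the (1 - alpha)
    bootstrap quantile."""
    means = [float(np.mean(g)) for g in groups]
    centered = [np.asarray(g, dtype=float) - m for g, m in zip(groups, means)]
    maxdiffs = np.empty(n_boot)
    for b in range(n_boot):
        bm = [np.mean(rng.choice(c, size=len(c), replace=True))
              for c in centered]
        maxdiffs[b] = max(abs(x - y) for x in bm for y in bm)
    crit = np.quantile(maxdiffs, 1 - alpha)
    k = len(groups)
    return {(i, j): abs(means[i] - means[j]) > crit
            for i in range(k) for j in range(i + 1, k)}
```

Because the reference distribution is resampled from the data rather than taken from normal theory, this kind of procedure does not rely on normality, which matches the abstract's point about asymmetric, platykurtic, or leptokurtic distributions.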

