Design, analysis and reporting of multi-arm trials and strategies to address multiple testing

2020 ◽  
Vol 49 (3) ◽  
pp. 968-978
Author(s):  
Ayodele Odutayo ◽  
Dmitry Gryaznov ◽  
Bethan Copsey ◽  
Paul Monk ◽  
Benjamin Speich ◽  
...  

Abstract Background It is unclear how multiple treatment comparisons are managed in the analysis of multi-arm trials, particularly with respect to controlling type I (false-positive) and type II (false-negative) errors. Methods We conducted a cohort study of clinical-trial protocols that were approved by research ethics committees in the UK, Switzerland, Germany and Canada in 2012. We examined the use of multiple-testing procedures to control the overall type I error rate. We created a decision tool to determine the need for multiple-testing procedures, compared the result of the decision tool to the analysis plan in each protocol, and compared the pre-specified analysis plans in trial protocols to their publications. Results Sixty-four protocols for multi-arm trials were identified, of which 50 involved multiple testing. Nine of the 50 trials (18%) used a single-step multiple-testing procedure such as a Bonferroni correction, and 17 (38%) used an ordered sequence of primary comparisons to control the overall type I error rate. Based on our decision tool, 45 of the 50 protocols (90%) required a multiple-testing procedure, but only 28 of the 45 (62%) accounted for multiplicity in their analysis or provided a rationale if no multiple-testing procedure was used. We identified 32 protocol–publication pairs, of which 8 planned a global-comparison test and 20 planned a multiple-testing procedure in their trial protocol. However, four of the eight trials (50%) did not use the planned global-comparison test, and 3 of the 20 trials (15%) did not perform the planned multiple-testing procedure in the publication. The sample size of our study was small, and we did not have access to the statistical-analysis plans for the included trials. Conclusions Strategies to reduce type I and type II errors are inconsistently employed in multi-arm trials. Important analytical differences exist between planned analyses in clinical-trial protocols and subsequent publications, which may suggest selective reporting of analyses.
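To make the two strategies most often found in these protocols concrete, here is a minimal Python sketch of a single-step Bonferroni correction and of an ordered (fixed-sequence) testing strategy for a multi-arm trial. The function names and example p-values are hypothetical, not taken from the study.

```python
def bonferroni(p_values, alpha=0.05):
    """Single-step Bonferroni: test each of the k comparisons at alpha / k."""
    k = len(p_values)
    return [p <= alpha / k for p in p_values]

def fixed_sequence(p_values_in_order, alpha=0.05):
    """Ordered sequence of primary comparisons: each hypothesis is tested
    at the full alpha, but only while every earlier hypothesis in the
    pre-specified order has been rejected."""
    rejections = []
    for p in p_values_in_order:
        if p <= alpha:
            rejections.append(True)
        else:
            # The first non-rejection stops the sequence; all later
            # hypotheses in the order cannot be rejected.
            rejections.extend([False] * (len(p_values_in_order) - len(rejections)))
            break
    return rejections

p = [0.012, 0.030, 0.200]   # hypothetical arm-vs-control p-values
print(bonferroni(p))        # [True, False, False]: each tested at 0.05/3
print(fixed_sequence(p))    # [True, True, False]: each tested at 0.05 in order
```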

2004 ◽  
Vol 3 (1) ◽  
pp. 1-69 ◽  
Author(s):  
Sandrine Dudoit ◽  
Mark J. van der Laan ◽  
Katherine S. Pollard

The present article proposes general single-step multiple testing procedures for controlling Type I error rates defined as arbitrary parameters of the distribution of the number of Type I errors, such as the generalized family-wise error rate. A key feature of our approach is the test statistics null distribution (rather than data generating null distribution) used to derive cut-offs (i.e., rejection regions) for these test statistics and the resulting adjusted p-values. For general null hypotheses, corresponding to submodels for the data generating distribution, we identify an asymptotic domination condition for a null distribution under which single-step common-quantile and common-cut-off procedures asymptotically control the Type I error rate, for arbitrary data generating distributions, without the need for conditions such as subset pivotality. Inspired by this general characterization of a null distribution, we then propose as an explicit null distribution the asymptotic distribution of the vector of null value shifted and scaled test statistics. In the special case of family-wise error rate (FWER) control, our method yields the single-step minP and maxT procedures, based on minima of unadjusted p-values and maxima of test statistics, respectively, with the important distinction in the choice of null distribution. Single-step procedures based on consistent estimators of the null distribution are shown to also provide asymptotic control of the Type I error rate. A general bootstrap algorithm is supplied to conveniently obtain consistent estimators of the null distribution. The special cases of t- and F-statistics are discussed in detail. The companion articles focus on step-down multiple testing procedures for control of the FWER (van der Laan et al., 2004b) and on augmentations of FWER-controlling methods to control error rates such as tail probabilities for the number of false positives and for the proportion of false positives among the rejected hypotheses (van der Laan et al., 2004a). The proposed bootstrap multiple testing procedures are evaluated by a simulation study and applied to genomic data in the fourth article of the series (Pollard et al., 2004).
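As a rough illustration of the FWER special case, the following Python sketch implements a single-step maxT procedure with a bootstrap estimate of the test statistics null distribution, for hypotheses H_j: mu_j = 0 about the column means of a data matrix. It simplifies the null-value shifting and scaling described above and is a sketch of the general idea, not the authors' exact algorithm.

```python
import numpy as np

def single_step_maxT(X, n_boot=1000, seed=0):
    """X: (n, m) data matrix; tests H_j: mu_j = 0 for each column j."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    t_obs = np.abs(X.mean(0) / (X.std(0, ddof=1) / np.sqrt(n)))
    Xc = X - X.mean(0)             # center columns so every null holds in the bootstrap world
    max_null = np.empty(n_boot)
    for b in range(n_boot):
        Xb = Xc[rng.integers(0, n, size=n)]   # resample rows with replacement
        tb = np.abs(Xb.mean(0) / (Xb.std(0, ddof=1) / np.sqrt(n)))
        max_null[b] = tb.max()     # maximum of the m null statistics
    # Single-step adjusted p-value: P(max_k |T*_k| >= |t_j|)
    adj_p = np.array([(max_null >= t).mean() for t in t_obs])
    return t_obs, adj_p

X = np.random.default_rng(1).normal(size=(30, 5))
X[:, 0] += 1.0                     # one truly non-null column
print(single_step_maxT(X)[1])      # small adjusted p-value only for column 0
```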


Author(s):  
Merrill D. Birkner ◽  
Sandra E. Sinisi ◽  
Mark J. van der Laan

Analysis of viral strand sequence data and viral replication capacity could potentially lead to biological insights regarding the replication ability of HIV-1. Determining specific target codons on the viral strand would facilitate the development of target-specific antiretrovirals. Various algorithmic and analysis techniques can be applied to this problem. In this paper, we apply two techniques to a data set consisting of 317 patients, each with 282 sequenced protease and reverse transcriptase codons. The first applies recently developed multiple testing procedures to find codons that have significant univariate associations with the replication capacity of the virus. A single-step multiple testing procedure (Pollard and van der Laan, 2003) was used to control the familywise error rate (FWER) at the five percent alpha level, and augmentation multiple testing procedures were applied to control the generalized familywise error rate (gFWER) and the tail probability of the proportion of false positives (TPPFP). We also applied a data-adaptive multiple regression algorithm to obtain a prediction of viral replication capacity based on an entire mutant/non-mutant sequence profile. This is a loss-based, cross-validated Deletion/Substitution/Addition regression algorithm (Sinisi and van der Laan, 2004), which builds candidate estimators for the prediction of a univariate outcome by minimizing an empirical risk. The two methods are separate techniques with distinct goals, used here to analyze the same structure of viral data.
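The augmentation idea referenced above can be sketched compactly: starting from the rejections of any FWER-controlling procedure, gFWER(k) control (at most k false positives, with the same probability guarantee) is obtained by additionally rejecting the k next most significant hypotheses. The Python sketch below uses hypothetical p-values and a Bonferroni base procedure.

```python
import numpy as np

def augment_to_gfwer(p_values, fwer_rejected, k):
    """p_values: array of raw p-values; fwer_rejected: boolean rejections
    from an FWER-controlling procedure; k: allowed number of false positives."""
    p = np.asarray(p_values)
    rejected = np.asarray(fwer_rejected).copy()
    not_rejected = np.flatnonzero(~rejected)
    # Add the k smallest remaining p-values to the rejection set.
    extra = not_rejected[np.argsort(p[not_rejected])][:k]
    rejected[extra] = True
    return rejected

p = np.array([0.001, 0.004, 0.02, 0.03, 0.40])
fwer = p <= 0.05 / len(p)              # Bonferroni at alpha = 0.05
print(augment_to_gfwer(p, fwer, k=1))  # [True, True, True, False, False]
```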


2016 ◽  
Vol 12 (1) ◽  
pp. 21-29 ◽  
Author(s):  
Daniel B. Rubin

Abstract The Optimal Discovery Procedure (ODP) is a method for simultaneous hypothesis testing that attempts to gain power relative to more standard techniques by exploiting multivariate structure [1]. Specializing to the example of testing whether components of a Gaussian mean vector are zero, we compare the power of the ODP to a Bonferroni-style method and to the Benjamini-Hochberg method when the testing procedures aim, respectively, to control certain Type I error rate measures, such as the expected number of false positives or the false discovery rate. We show through theoretical results, numerical comparisons, and two microarray examples that when the rejection regions for the ODP test statistics are chosen such that the procedure is guaranteed to uniformly control a Type I error rate measure, the technique is generally less powerful than competing methods. We contrast and explain these results in light of previously proven optimality theory for the ODP. We also compare the ordering given by the ODP test statistics to the standard ranking based on sorting univariate p-values from smallest to largest. In the cases we considered, the standard ordering was superior, and the ODP rankings were adversely impacted by correlation.
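For reference, the Benjamini-Hochberg comparator and the standard ranking discussed above fit in a few lines of Python; the example p-values below are made up.

```python
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Linear step-up procedure controlling the false discovery rate at q."""
    p = np.asarray(p_values)
    m = len(p)
    order = np.argsort(p)                        # the standard ranking: smallest p first
    thresholds = q * np.arange(1, m + 1) / m     # i/m * q for the i-th smallest p-value
    below = p[order] <= thresholds
    # Reject all hypotheses up to the largest i whose sorted p-value
    # falls below its threshold.
    n_reject = below.nonzero()[0].max() + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:n_reject]] = True
    return rejected

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.60]))
# [True, True, False, False, False] at q = 0.05
```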


2005 ◽  
Vol 44 (03) ◽  
pp. 431-437 ◽  
Author(s):  
J. Landgrebe ◽  
E. Brunner ◽  
F. Bretz

Summary Objectives: To discuss different error concepts relevant to microarray experiments, review some commonly used multiple testing procedures, and compare the different approaches as applied to gene expression data. Methods: This article focuses on familywise error rate (FWER) and false discovery rate (FDR) controlling procedures. Methods under investigation include Bonferroni-type methods and their improvements (including resampling approaches), modified Bonferroni methods, data-driven approaches, and the linear step-up method and its modifications. Particular emphasis is placed on the assumptions, advantages and limitations of the investigated methods. Results: FWER-controlling procedures are often too conservative in high-dimensional screening studies. A better balance between the raw P-values and the stringent FWER-adjusted P-values may be required in many situations, as provided by FDR-controlling and related procedures. Conclusions: The questions of which error concept to apply and which multiple testing procedure to use remain open. Although we believe that the FDR or one of its variants will be applied more often in the future, long-term experience with microarray technology is missing, and thus the validity of appropriate multiple testing procedures cannot yet be assessed for microarray data analysis.
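A small numerical illustration of this trade-off, assuming the statsmodels package is installed and using hypothetical p-values: Holm (an FWER-controlling improvement of Bonferroni) versus the Benjamini-Hochberg linear step-up (FDR control). The FDR adjustment typically yields smaller adjusted P-values and more rejections.

```python
from statsmodels.stats.multitest import multipletests

raw = [0.0002, 0.001, 0.008, 0.012, 0.04, 0.2]   # hypothetical raw p-values
for method in ("holm", "fdr_bh"):
    reject, adjusted, _, _ = multipletests(raw, alpha=0.05, method=method)
    print(method, [round(p, 4) for p in adjusted], int(reject.sum()), "rejections")
# The Holm (FWER) adjustment rejects fewer hypotheses than the
# Benjamini-Hochberg (FDR) adjustment on the same raw p-values.
```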


Author(s):  
Damian Clarke ◽  
Joseph P. Romano ◽  
Michael Wolf

When considering multiple-hypothesis tests simultaneously, standard statistical techniques will lead to overrejection of null hypotheses unless the multiplicity of the testing framework is explicitly considered. In this article, we discuss the Romano–Wolf multiple-hypothesis correction and document its implementation in Stata. The Romano–Wolf correction (asymptotically) controls the familywise error rate, that is, the probability of rejecting at least one true null hypothesis among a family of hypotheses under test. This correction is considerably more powerful than earlier multiple-testing procedures, such as the Bonferroni and Holm corrections, given that it takes into account the dependence structure of the test statistics by resampling from the original data. We describe a command, rwolf, that implements this correction and provide several examples based on a wide range of models. We document and discuss the performance gains from using rwolf over other multiple-testing procedures that control the familywise error rate.
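Although rwolf itself is a Stata command, the stepdown logic it implements can be sketched in Python. The sketch below assumes the caller supplies observed absolute test statistics together with a matrix of bootstrap statistics recentred so that each null hypothesis holds; it illustrates the Romano-Wolf stepdown idea and is not a port of the command.

```python
import numpy as np

def romano_wolf_stepdown(t_obs, t_boot):
    """t_obs: (m,) observed |test statistics|; t_boot: (B, m) bootstrap
    |statistics| recentred so that every null hypothesis holds.
    Returns stepdown-adjusted p-values controlling the FWER."""
    m = t_obs.shape[0]
    order = np.argsort(-t_obs)           # most significant hypothesis first
    adj = np.empty(m)
    prev = 0.0
    for s, j in enumerate(order):
        remaining = order[s:]            # hypotheses not yet stepped past
        # Null distribution of the maximum over the remaining hypotheses;
        # this is where the dependence structure enters.
        max_dist = t_boot[:, remaining].max(axis=1)
        p = (max_dist >= t_obs[j]).mean()
        prev = max(prev, p)              # enforce monotone adjusted p-values
        adj[j] = prev
    return adj

rng = np.random.default_rng(0)
t_obs = np.abs(rng.normal([3.0, 2.0, 0.5], 1.0))
t_boot = np.abs(rng.normal(size=(2000, 3)))   # stand-in for resampled null statistics
print(romano_wolf_stepdown(t_obs, t_boot))
```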


2015 ◽  
Vol 14 (1) ◽  
pp. 1-19 ◽  
Author(s):  
Rosa J. Meijer ◽  
Thijmen J.P. Krebs ◽  
Jelle J. Goeman

Abstract We present a multiple testing method for hypotheses that are ordered in space or time. Given such hypotheses, the elementary hypotheses as well as regions of consecutive hypotheses are of interest. These region hypotheses not only have intrinsic meaning, but testing them also has the advantage that (potentially small) signals across a region are combined in one test. Because the expected number and length of potentially interesting regions are usually not available beforehand, we propose a method that tests all possible region hypotheses as well as all individual hypotheses in a single multiple testing procedure that controls the familywise error rate. We start by testing the global null hypothesis and, when this hypothesis can be rejected, continue by further specifying the exact location or locations of the effect. The method is implemented in the …
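A caricature of this top-down search in Python: test the full region with Fisher's combination test and, only upon rejection, descend into the region's two halves to localize the effect. This shows the control flow only; the actual procedure controls the familywise error rate with a carefully chosen alpha-weighting scheme that is not reproduced here, and the ordered p-values below are hypothetical.

```python
from scipy.stats import combine_pvalues

def test_regions(p_values, lo, hi, alpha, rejected):
    """Recursively test the region p_values[lo:hi] (hypotheses ordered in
    space or time) and refine wherever a rejection occurs."""
    _, p_region = combine_pvalues(p_values[lo:hi], method="fisher")
    if p_region <= alpha:
        rejected.append((lo, hi))        # the region hypothesis is rejected
        if hi - lo > 1:
            mid = (lo + hi) // 2         # descend into the two halves
            test_regions(p_values, lo, mid, alpha, rejected)
            test_regions(p_values, mid, hi, alpha, rejected)

p = [0.30, 0.02, 0.01, 0.03, 0.40, 0.50, 0.45, 0.60]  # hypothetical ordered p-values
found = []
test_regions(p, 0, len(p), 0.05, found)
print(found)   # nested regions narrowing in on the signal near positions 1-3
```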


2021 ◽  
Vol 18 (5) ◽  
pp. 521-528
Author(s):  
Eric S Leifer ◽  
James F Troendle ◽  
Alexis Kolecki ◽  
Dean A Follmann

Background/aims: The two-by-two factorial design randomizes participants to receive treatment A alone, treatment B alone, both treatments A and B (AB), or neither treatment (C). When the combined effect of A and B is less than the sum of the A and B effects, called a subadditive interaction, there can be low power to detect the A effect using an overall test, that is, a factorial analysis which compares the A and AB groups to the C and B groups. Such an interaction may have occurred in the Action to Control Cardiovascular Risk in Diabetes blood pressure trial (ACCORD BP), which simultaneously randomized participants to receive intensive or standard blood pressure control and intensive or standard glycemic control. For the primary outcome of major cardiovascular events, the overall test for efficacy of intensive blood pressure control was nonsignificant. In such an instance, simple effect tests of A versus C and B versus C may be useful since they are not affected by a subadditive interaction, but they can have lower power since they use half the participants of the overall trial. We investigate multiple testing procedures which exploit the overall tests' sample-size advantage and the simple tests' robustness to a potential interaction. Methods: In the time-to-event setting, we use the stratified and ordinary logrank statistics' asymptotic means to calculate the power of the overall and simple tests under various scenarios. We consider the A and B research questions to be unrelated and allocate a 0.05 significance level to each. For each question, we investigate three multiple testing procedures which allocate the type I error in different proportions across the overall and simple effects as well as the AB effect. The Equal Allocation 3 procedure allocates equal type I error to each of the three effects; the Proportional Allocation 2 procedure allocates 2/3 of the type I error to the overall A (respectively, B) effect and the remaining type I error to the AB effect; and the Equal Allocation 2 procedure allocates equal amounts to the simple A (respectively, B) and AB effects. These procedures are applied to ACCORD BP. Results: Across various scenarios, Equal Allocation 3 had robust power for detecting a true effect. For ACCORD BP, all three procedures would have detected a benefit of intensive glycemia control. Conclusions: When there is no interaction, Equal Allocation 3 has less power than a factorial analysis. However, Equal Allocation 3 often has greater power when there is an interaction. The R package factorial2x2 can be used to explore the power gain or loss for different scenarios.
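As a back-of-the-envelope illustration of how the allocations differ, the following Python sketch computes the marginal power of individual component tests at their allocated significance levels under a one-sided normal approximation. The drift (noncentrality) values stand in for the logrank asymptotic means used in the article and are hypothetical; the article's procedures combine component tests, which is not reproduced here.

```python
from scipy.stats import norm

def power(drift, alpha):
    """Power of a one-sided z-test with the given noncentrality (drift)."""
    return 1 - norm.cdf(norm.ppf(1 - alpha) - drift)

alpha = 0.05
drift_overall = 2.2  # overall A test: all participants, attenuated by a subadditive interaction
drift_simple = 2.5   # simple A-vs-C test: half the participants, no attenuation

# Equal Allocation 3: alpha/3 to each of the overall, simple and AB effects.
print("EA3, overall test:", round(power(drift_overall, alpha / 3), 3))
print("EA3, simple test: ", round(power(drift_simple, alpha / 3), 3))
# Proportional Allocation 2: 2/3 of alpha to the overall effect, 1/3 to AB.
print("PA2, overall test:", round(power(drift_overall, 2 * alpha / 3), 3))
# Equal Allocation 2: alpha/2 to each of the simple and AB effects.
print("EA2, simple test: ", round(power(drift_simple, alpha / 2), 3))
```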

