equivalence trials
Recently Published Documents


TOTAL DOCUMENTS: 82 (five years: 2)
H-INDEX: 20 (five years: 0)

2021 ◽  
pp. 096228022098857
Author(s):  
Yongqiang Tang

Log-rank tests have been widely used to compare two survival curves in biomedical research. We describe a unified approach to power and sample size calculation for the unweighted and weighted log-rank tests in superiority, noninferiority and equivalence trials. It is suitable for both time-driven and event-driven trials. A numerical algorithm is suggested. It allows flexible specification of the patient accrual distribution, baseline hazards, and proportional or nonproportional hazards patterns, and enables efficient sample size calculation when there is a range of choices for the patient accrual pattern and trial duration. A confidence interval method is proposed for the trial duration of an event-driven trial. We point out potential issues with several popular sample size formulae. Under proportional hazards, the power of a survival trial is commonly believed to be determined by the number of observed events. The belief is roughly valid for noninferiority and equivalence trials with similar survival and censoring distributions between the two groups, and for superiority trials with balanced group sizes. In unbalanced superiority trials, the power also depends on other factors such as data maturity. Surprisingly, the log-rank test usually yields slightly higher power than the Wald test from the Cox model under proportional hazards in simulations. We consider various nonproportional hazards patterns induced by delayed effects, cure fractions, and/or treatment switching. Explicit power formulae are derived for the combination test that takes the maximum of two or more weighted log-rank tests to handle uncertain nonproportional hazards patterns. Numerical examples are presented for illustration.
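For context on the sample size formulae the abstract critiques: the classical Schoenfeld approximation for the required number of events in a two-sided log-rank test under proportional hazards can be sketched as below. This is the textbook baseline formula, not the paper's unified numerical algorithm; the function and parameter names are illustrative.

```python
import math
from statistics import NormalDist  # Python 3.8+ standard library

def schoenfeld_events(hazard_ratio, alpha=0.05, power=0.80, p1=0.5):
    """Approximate number of events for a two-sided log-rank test
    under proportional hazards (Schoenfeld's formula).
    p1 is the proportion of events in group 1; p2 = 1 - p1."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p2 = 1 - p1
    return (z_alpha + z_beta) ** 2 / (p1 * p2 * math.log(hazard_ratio) ** 2)

# Hazard ratio 0.7, 5% two-sided alpha, 80% power, 1:1 allocation:
events = math.ceil(schoenfeld_events(0.7))  # about 247 events
```

The formula depends on the design only through the log hazard ratio and the allocation split, which is why power is often said to be driven purely by the number of events; the abstract notes that this shorthand breaks down in unbalanced superiority trials, where factors such as data maturity also matter.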


2020 ◽  
Vol 131 (1) ◽  
pp. 208-209
Author(s):  
Patrick Schober ◽  
Thomas R. Vetter

2020 ◽  
pp. 1
Author(s):  
Andrew J. Hughes ◽  
Hugo C. Temperley ◽  
Daniel P. Ahern ◽  
Jake McDonnell ◽  
Joseph S. Butler

2019 ◽  
Vol 25 (4) ◽  
pp. 143-144
Author(s):  
Kevin Riggs ◽  
Joshua Richman ◽  
Stefan Kertesz

High-quality research demonstrating a lack of effectiveness may facilitate the ‘de-adoption’ of ineffective health services. However, there has been little debate on the optimal design for ineffectiveness research—studies exploring the research hypothesis that an intervention is ineffective. The aim of this study was to explore investigators’ preferences for trial design for ineffectiveness research. We conducted a mixed-methods online survey with principal investigators identified from clinicaltrials.gov. A vignette described researchers planning a trial to test a widely used intervention they hypothesised was ineffective. One multiple-choice question asked whether a superiority trial or equivalence trial design was favoured, and one free-response question asked about the reasons for that choice. Free-response answers were analysed using content analysis to identify related reasons. 139 participants completed the survey (completion rate 37.5%). Overall, 56.8% favoured superiority trials, 27.3% favoured equivalence trials and 15.8% were unsure. Reasons identified for favouring superiority trials were: (1) evidence of superiority should be required to justify active treatment, (2) superiority trials are more familiar, (3) placebo should not be the comparator in equivalence trials and (4) superiority trials require smaller sample sizes. Reasons identified for favouring equivalence trials were: (1) negative superiority trials represent a lack of evidence of effectiveness, not evidence of ineffectiveness and (2) the research hypothesis should not be the same as the null hypothesis. Only a minority of experienced researchers favour equivalence trials for ineffectiveness research; misconceptions about and lack of familiarity with equivalence trials may be contributing factors.


F1000Research ◽  
2019 ◽  
Vol 8 ◽  
pp. 36
Author(s):  
Gerben ter Riet ◽  
Bram W.C. Storosum ◽  
Aeilko H. Zwinderman
The debate on reproducibility in biomedicine will gain precision only if we agree on what reproducibility means. Importantly, reproducibility should be distinguished from validity (“truth”). We propose the application of an equivalence trials framework to clarify the concept of reproducibility by replacing the (narrow) equivalence zone around a zero difference with a zone of reproducibility around (a) previous finding(s).
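The proposal above amounts to recentring the usual equivalence interval on a previous estimate rather than on zero. A minimal sketch of that check, assuming a normally distributed effect estimate and using two one-sided tests (TOST); the reproducibility half-width `margin` is a hypothetical, prespecified choice, not something the framework itself fixes:

```python
from statistics import NormalDist  # Python 3.8+ standard library

def within_reproducibility_zone(estimate, se, previous, margin, alpha=0.05):
    """TOST against a zone centred on a previous finding:
    [previous - margin, previous + margin]. Returns True when both
    one-sided nulls are rejected, i.e. the new estimate is
    demonstrably inside the reproducibility zone."""
    z = NormalDist().inv_cdf(1 - alpha)
    lower, upper = previous - margin, previous + margin
    return (estimate - lower) / se > z and (upper - estimate) / se > z

# A replication estimate of 0.48 (SE 0.10) versus an original finding
# of 0.50, with a reproducibility half-width of 0.30:
reproduced = within_reproducibility_zone(0.48, 0.10, 0.50, 0.30)
```

Setting `previous = 0` recovers the ordinary equivalence test, which makes the conceptual shift in the abstract explicit: reproducibility is equivalence to a prior finding, not equivalence to no effect.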

