Optimal Sample Allocation Under Unequal Costs in Cluster-Randomized Trials

Journal of Educational and Behavioral Statistics ◽ 2020 ◽ Vol 45 (4) ◽ pp. 446-474
Author(s): Zuchao Shen ◽ Benjamin Kelcey

Conventional optimal design frameworks consider only a narrow range of sampling cost structures, which constricts their capacity to identify the most powerful and efficient designs. We relax several constraints of previous optimal design frameworks by allowing for variable sampling costs in cluster-randomized trials. The proposed framework introduces additional design considerations and can identify designs with more statistical power, even when some parameters are constrained by immutable practical concerns. The results also suggest that the efficiency gains introduced through the expanded framework are fairly robust to misspecification of the expanded cost structure and of concomitant design parameters (e.g., the intraclass correlation coefficient). The proposed framework is implemented in the R package odr.
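As an illustration of the classical special case that the expanded framework generalizes, the sketch below computes the budget-constrained optimal cluster size for a two-level CRT with a single per-individual cost and per-cluster cost (the Raudenbush-style allocation); the paper and the odr package relax exactly this restriction by letting costs differ across treatment conditions. This is a minimal Python sketch with hypothetical budget, cost, and intraclass correlation values, not the odr implementation itself.

```python
# Classical optimal allocation for a two-level CRT under a budget
# constraint: one per-individual cost (c1) and one per-cluster cost (c2)
# shared by both arms. The paper's framework generalizes this to
# arm-specific costs. All numeric inputs below are hypothetical.
import math

def optimal_allocation(budget, c1, c2, icc):
    """Return (n, J): individuals per cluster and number of clusters
    that minimize the variance of the treatment effect estimate."""
    n = math.sqrt((c2 / c1) * (1 - icc) / icc)   # optimal cluster size
    J = budget / (c2 + c1 * n)                   # clusters the budget affords
    return n, J

n, J = optimal_allocation(budget=50_000, c1=10, c2=300, icc=0.20)
print(f"optimal n per cluster: {n:.1f}, affordable clusters: {J:.1f}")
```

The intuition carries over to the expanded framework: the cheaper it is to add individuals relative to clusters, and the lower the intraclass correlation, the larger the optimal cluster size.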

AERA Open ◽ 2020 ◽ Vol 6 (3) ◽ pp. 233285842093952
Author(s): Qi Zhang ◽ Jessaca Spybrook ◽ Fatih Unlu

With the increasing demand for evidence on teacher effectiveness and student achievement, more impact studies are being conducted to examine the effectiveness of professional development (PD) interventions. Cluster randomized trials (CRTs) are often carried out to assess PD interventions that aim to improve both teacher and student outcomes. Because the student and teacher outcomes have different design parameters (i.e., intraclass correlations and R2 values) and benchmark effect sizes, two power analyses are necessary when planning a CRT to detect both teacher and student effects in one study. These two power analyses are often conducted separately, without considering how design choices made to power the study for student effects affect its power for teacher effects, and vice versa. In this study, we consider strategies to maximize the efficiency of the study design when both student and teacher effects are of primary interest.
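A minimal sketch of what such a joint analysis can look like: standard two-level CRT power formulas (as in PowerUp!-style calculations) applied separately to a student and a teacher outcome, searching for the smallest number of schools that powers both. The design parameters, effect sizes, and per-school sample sizes below are hypothetical assumptions for illustration, not the authors' procedure or values.

```python
# Joint power analysis sketch for a CRT with both student and teacher
# outcomes. Each outcome gets its own ICC, covariate R^2s, benchmark
# effect size, and within-school sample size; we search for the smallest
# number of schools J that reaches 80% power on both.
from scipy.stats import nct, t

def power_2level(J, n, delta, icc, r2_between, r2_within,
                 p=0.5, alpha=0.05, g=1):
    """Power to detect effect size delta in a two-level CRT with J
    clusters, n individuals per cluster, and g cluster-level covariates."""
    se = ((icc * (1 - r2_between)) / (p * (1 - p) * J)
          + ((1 - icc) * (1 - r2_within)) / (p * (1 - p) * J * n)) ** 0.5
    df = J - g - 2
    tcrit = t.ppf(1 - alpha / 2, df)
    lam = delta / se                         # noncentrality parameter
    return 1 - nct.cdf(tcrit, df, lam) + nct.cdf(-tcrit, df, lam)

# Hypothetical design parameters for the two outcomes.
student = dict(n=60, delta=0.20, icc=0.20, r2_between=0.5, r2_within=0.3)
teacher = dict(n=5, delta=0.40, icc=0.15, r2_between=0.3, r2_within=0.2)

J = 4
while min(power_2level(J, **student), power_2level(J, **teacher)) < 0.80:
    J += 2
print(f"smallest J powering both outcomes: {J}")
```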


Methodology ◽ 2021 ◽ Vol 17 (2) ◽ pp. 92-110
Author(s): Nianbo Dong ◽ Jessaca Spybrook ◽ Benjamin Kelcey ◽ Metin Bulus

Researchers often apply moderation analyses to examine whether the effects of an intervention differ conditional on individual- or cluster-level moderator variables such as gender, pretest, or school size. This study develops formulas for power analyses to detect moderator effects in two-level cluster randomized trials (CRTs) using hierarchical linear models. We derive formulas for estimating statistical power, the minimum detectable effect size difference, and 95% confidence intervals for cluster- and individual-level moderators. Our framework accommodates binary or continuous moderators, designs with or without covariates, and effects of individual-level moderators that vary randomly or nonrandomly across clusters. A small Monte Carlo simulation confirms the accuracy of our formulas. We also compare power between main effect and moderation analyses, discuss the effects of misspecifying the moderator slope (randomly vs. nonrandomly varying), and conclude with directions for future research. We provide software for conducting power analyses of moderator effects in CRTs.
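For a sense of the kind of formula involved, the sketch below computes power for one of the simpler cases: a binary cluster-level moderator in a two-level CRT. The interaction's standard error shrinks with the moderator's balance q(1 - q), and the test uses J - 4 degrees of freedom for the four cluster-level fixed effects. This is a hedged illustration with hypothetical parameter values, following the general form of such formulas rather than reproducing the paper's full set of cases (which also covers individual-level and random-slope moderators).

```python
# Power sketch for a treatment-by-moderator interaction with a binary
# cluster-level moderator in a two-level CRT. Assumes a balanced design
# with treatment proportion p and moderator proportion q; all numeric
# inputs are hypothetical.
from scipy.stats import nct, t

def power_cluster_moderator(J, n, delta, icc, r2_b=0.0, r2_w=0.0,
                            p=0.5, q=0.5, alpha=0.05):
    """Power to detect an effect size difference delta between the
    two cluster-level moderator subgroups across J clusters of size n."""
    se = ((icc * (1 - r2_b) + (1 - icc) * (1 - r2_w) / n)
          / (p * (1 - p) * q * (1 - q) * J)) ** 0.5
    df = J - 4        # four cluster-level fixed effects in the model
    tcrit = t.ppf(1 - alpha / 2, df)
    lam = delta / se
    return 1 - nct.cdf(tcrit, df, lam) + nct.cdf(-tcrit, df, lam)

print(f"power with J=60, n=25: {power_cluster_moderator(60, 25, delta=0.25, icc=0.20):.3f}")
```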

