Optimal sample size calculation for null hypothesis significance tests

2019 ◽  
Author(s):  
Joseph F. Mudge ◽  
Jeffrey E. Houlahan

Abstract Traditional study design tools for estimating appropriate sample sizes are not consistently used in ecology and can lead to low statistical power to detect biologically relevant effects. We have developed a new approach to estimating optimal sample sizes that requires only three parameters: a maximum acceptable average of α and β, a critical effect size of minimum biological relevance, and an estimate of the relative costs of Type I vs. Type II errors. This approach can be used to show the general circumstances under which different combinations of critical effect sizes and maximum acceptable averages of α and β are attainable for different statistical tests. The optimal α sample size estimation approach can require fewer samples than traditional sample size estimation methods when the costs of Type I and II errors are assumed to be equal, but recommends comparatively more samples as Type I vs. Type II error costs become increasingly unequal. When sampling costs and the absolute costs of Type I and II errors are known, optimal sample size estimation can be used to determine the smallest sample size at which the cost of an additional sample outweighs its associated reduction in errors. Optimal sample size estimation constitutes a more flexible and intuitive tool than traditional sample size estimation approaches, given the constraints and unknowns commonly faced by ecologists during study design.
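The optimal α idea can be sketched numerically: for each candidate n, search over α for the level that minimizes the cost-weighted average of α and β, and take the smallest n at which that minimum drops below the acceptable threshold. The sketch below uses a two-sided one-sample z-test and made-up function names; it is an illustration of the general scheme, not the authors' exact formulation.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_ppf(p):
    """Standard normal quantile by bisection (precision is ample for a sketch)."""
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = (lo + hi) / 2.0
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def optimal_alpha_sample_size(d, max_avg_error, cost_ratio=1.0, n_max=10_000):
    """Smallest n at which some alpha makes the cost-weighted average of
    alpha and beta fall below max_avg_error, for a two-sided one-sample
    z-test with standardized critical effect size d.
    cost_ratio = cost of a Type I error / cost of a Type II error."""
    w1 = cost_ratio / (1.0 + cost_ratio)   # weight on alpha
    w2 = 1.0 / (1.0 + cost_ratio)          # weight on beta
    # precompute (alpha, critical value) pairs once
    grid = [(a / 1000.0, norm_ppf(1.0 - a / 2000.0)) for a in range(1, 500)]
    for n in range(2, n_max):
        root_n = math.sqrt(n)
        # beta = P(fail to reject | true effect d), ignoring the tiny far tail
        best = min(w1 * a + w2 * norm_cdf(z - d * root_n) for a, z in grid)
        if best <= max_avg_error:
            return n
    return None
```

With equal error costs, demanding an average error rate of at most 0.05 at d = 0.5 lands close to the conventional α = 0.05, power = 0.95 calculation; raising `cost_ratio` pushes the optimal α down and the recommended n up, mirroring the behaviour described in the abstract.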

2020 ◽  
Vol 29 (10) ◽  
pp. 2958-2971 ◽  
Author(s):  
Maria Stark ◽  
Antonia Zapf

Introduction In a confirmatory diagnostic accuracy study, sensitivity and specificity are considered co-primary endpoints. For the sample size calculation, the prevalence of the target population must be taken into account to obtain a representative sample. In this context, a general problem arises: with a low or high prevalence, the study may be overpowered in one subpopulation. A further issue is the correct pre-specification of the true prevalence, since an incorrect assumption about the prevalence results in an over- or underestimated sample size. Methods To obtain the desired power independent of the prevalence, a method for an optimal sample size calculation for the comparison of an experimental diagnostic test against a pre-specified minimum sensitivity and specificity is proposed. To address the problem of an incorrectly pre-specified prevalence, a blinded one-time re-estimation design and a blinded repeated re-estimation design of the sample size, both based on the prevalence, are evaluated in a simulation study. Both designs are compared to a fixed design and additionally to each other. Results The type I error rates of both blinded re-estimation designs are not inflated. Their empirical overall power equals the desired theoretical power, and both designs offer unbiased estimates of the prevalence. The repeated re-estimation design reveals no advantages concerning the mean squared error of the re-estimated prevalence or sample size compared to the one-time re-estimation design. The appropriate size of the internal pilot study in the one-time re-estimation design is 50% of the initially calculated sample size. Conclusions A one-time re-estimation design of the prevalence based on the optimal sample size calculation is recommended in single-arm diagnostic accuracy studies.
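The prevalence scaling described above can be made concrete with the textbook normal-approximation sample size for a single proportion: compute the required numbers of diseased and non-diseased subjects separately, then inflate by the prevalence so both expected subgroup sizes are met. This is a minimal sketch using the standard one-sided approximation, not necessarily the authors' exact method; the thresholds and error rates are illustrative.

```python
import math

def norm_ppf(p):
    """Standard normal quantile by bisection over the erf-based CDF."""
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = (lo + hi) / 2.0
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def n_one_proportion(p0, p1, alpha=0.025, beta=0.2):
    """Subjects needed to reject H0: p <= p0 (one-sided level alpha)
    with power 1 - beta when the true proportion is p1."""
    za, zb = norm_ppf(1.0 - alpha), norm_ppf(1.0 - beta)
    num = za * math.sqrt(p0 * (1 - p0)) + zb * math.sqrt(p1 * (1 - p1))
    return math.ceil((num / (p1 - p0)) ** 2)

def total_sample_size(se0, se1, sp0, sp1, prevalence, alpha=0.025, beta=0.2):
    """Overall N such that the expected diseased and non-diseased subgroup
    counts both meet their separate requirements under the assumed prevalence."""
    n_cases = n_one_proportion(se0, se1, alpha, beta)     # for sensitivity
    n_controls = n_one_proportion(sp0, sp1, alpha, beta)  # for specificity
    return math.ceil(max(n_cases / prevalence, n_controls / (1 - prevalence)))
```

A blinded one-time re-estimation would simply recompute this N at the interim, replacing the assumed prevalence with the observed proportion of diseased subjects, which does not require unblinding the test results.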


2017 ◽  
Vol 28 (1) ◽  
pp. 117-133 ◽  
Author(s):  
Thomas Asendorf ◽  
Robin Henderson ◽  
Heinz Schmidli ◽  
Tim Friede

We consider modelling and inference, as well as sample size estimation and re-estimation, for clinical trials with longitudinal count data as outcomes. Our approach is general but is rooted in the design and analysis of multiple sclerosis trials, where lesion counts obtained by magnetic resonance imaging are important endpoints. We adopt a binomial thinning model that allows for correlated counts with marginal Poisson or negative binomial distributions. Methods for sample size planning and blinded sample size re-estimation for randomised controlled clinical trials with such outcomes are developed. The models and approaches are applicable to data with incomplete observations. A simulation study is conducted to assess the effectiveness of the sample size estimation and blinded sample size re-estimation methods. Sample sizes attained through these procedures are shown to maintain the desired study power without inflating the type I error. Data from a recent trial in patients with secondary progressive multiple sclerosis illustrate the modelling approach.
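A binomial thinning (INAR(1)) process with Poisson marginals, of the kind used for correlated lesion counts, can be simulated in a few lines: each count survives to the next scan with probability a, and fresh Poisson innovations keep the marginal mean constant. The function and parameter names below are assumptions for illustration; stdlib only.

```python
import math
import random

def simulate_inar1(n, mean, a, rng=None):
    """Simulate a stationary INAR(1) count series with Poisson(mean) marginals:
    X_t = a o X_{t-1} + eps_t, with eps_t ~ Poisson((1 - a) * mean).
    The thinning a o X keeps each of the X events independently with
    probability a, so the Poisson marginal is preserved and the lag-1
    autocorrelation equals a."""
    rng = rng or random.Random()

    def poisson(lam):
        # Knuth's multiplication method; fine for the small means of lesion counts
        limit, k, p = math.exp(-lam), 0, 1.0
        while p > limit:
            k += 1
            p *= rng.random()
        return k - 1

    x = poisson(mean)  # draw X_1 from the stationary marginal
    series = [x]
    for _ in range(n - 1):
        thinned = sum(1 for _ in range(x) if rng.random() < a)  # a o X_{t-1}
        x = thinned + poisson((1.0 - a) * mean)
        series.append(x)
    return series
```

Simulating the pooled, treatment-blind series this way is one route to checking a blinded re-estimation rule by Monte Carlo.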


2020 ◽  
Vol 14 (1) ◽  
pp. 41-54
Author(s):  
Elham Basiri ◽  
Seyed Mahdi Salehi

2020 ◽  
Vol 40 (6) ◽  
pp. 797-814
Author(s):  
Michael Fairley ◽  
Lauren E. Cipriano ◽  
Jeremy D. Goldhaber-Fiebert

Purpose. Health economic evaluations that include the expected value of sample information support implementation decisions as well as decisions about further research. However, just as decision makers must consider portfolios of implementation spending, they must also identify the optimal portfolio of research investments. Methods. Under a fixed research budget, a decision maker determines which studies to fund; additional budget allocated to one study to increase the study sample size implies less budget available to collect information to reduce decision uncertainty in other implementation decisions. We employ a budget-constrained portfolio optimization framework in which the decisions are whether to invest in a study and at what sample size. The objective is to maximize the sum of the studies’ population expected net benefit of sampling (ENBS). We show how to determine the optimal research portfolio and study-specific levels of investment. We demonstrate our framework with a stylized example to illustrate solution features and a real-world application using 6 published cost-effectiveness analyses. Results. Among the studies selected for nonzero investment, the optimal sample size occurs at the point at which the marginal population ENBS divided by the marginal cost of additional sampling is the same for all studies. Compared with standard ENBS optimization without a research budget constraint, optimal budget-constrained sample sizes are typically smaller but allow more studies to be funded. Conclusions. The budget constraint for research studies directly implies that the optimal sample size for additional research is not the point at which the ENBS is maximized for individual studies. A portfolio optimization approach can yield higher total ENBS. Ultimately, there is a maximum willingness to pay for incremental information that determines optimal sample sizes.


2015 ◽  
Vol 9 (2) ◽  
pp. 1822-1833
Author(s):  
Murat Doğan

In this study, Monte Carlo simulation is used to evaluate the behaviour of CFA fit indices under different conditions (sample size, estimation method, and distributional condition). The simulation used seven sample sizes (50, 100, 200, 400, 800, 1600, and 4000), four estimation methods (Maximum Likelihood, Generalized Least Squares, Least Squares, and Weighted Least Squares), and three distributional conditions (normal, slightly non-normal, and moderately non-normal). The simulation was conducted with the EQS software to examine the effect of these conditions on the eleven fit indices most commonly studied in CFA and SEM. As a result of this study, all of the factors studied are shown to have an influence on the fit indices.
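The full factorial design described above (7 sample sizes × 4 estimators × 3 distributional conditions = 84 cells) can be enumerated directly; the short estimator labels are my abbreviations of the methods named in the abstract.

```python
from itertools import product

sample_sizes = [50, 100, 200, 400, 800, 1600, 4000]
estimators = ["ML", "GLS", "LS", "WLS"]  # the four estimation methods
distributions = ["normal", "slightly non-normal", "moderately non-normal"]

# every combination of the three factors is one simulation cell
conditions = list(product(sample_sizes, estimators, distributions))
print(len(conditions))  # 7 * 4 * 3 = 84 cells
```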

