Supplemental Material for Type I Error Inflation in the Traditional By-Participant Analysis to Metamemory Accuracy: A Generalized Mixed-Effects Model Perspective

Stats ◽  
2019 ◽  
Vol 2 (2) ◽  
pp. 174-188
Author(s):  
Yoshifumi Ukyo ◽  
Hisashi Noma ◽  
Kazushi Maruo ◽  
Masahiko Gosho

The mixed-effects model for repeated measures (MMRM) approach has been widely applied in longitudinal clinical trials. Many of the standard inference methods for MMRM can inflate the type I error rate of tests for the treatment effect when the longitudinal dataset is small and involves missing measurements. We propose two improved inference methods for MMRM analyses: (1) a Bartlett correction with the adjustment term approximated by bootstrap, and (2) a Monte Carlo test using a null distribution estimated by bootstrap. These methods can be implemented regardless of model complexity and missingness pattern via a unified computational framework. Simulation studies show that the proposed methods maintain the type I error rate properly, even in small and incomplete longitudinal clinical trial settings. An application to a postnatal depression clinical trial is also presented.
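The two proposed procedures share a simple structure: simulate the test statistic under the null hypothesis by parametric bootstrap, then either rescale the observed statistic so its null mean matches the reference chi-square (Bartlett correction) or compare it directly to the bootstrap distribution (Monte Carlo test). A minimal numpy sketch of both ideas, using a toy one-sample normal likelihood-ratio test in place of the full MMRM fit (the simulation setup and sample sizes are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def lrt_stat(x, mu0=0.0):
    # -2 log likelihood ratio for H0: mu = mu0 in a normal model
    # with unknown variance (asymptotically chi-square with df = 1)
    n = len(x)
    s2_null = np.mean((x - mu0) ** 2)       # variance MLE under H0
    s2_alt = np.mean((x - x.mean()) ** 2)   # unrestricted variance MLE
    return n * np.log(s2_null / s2_alt)

x = rng.normal(0.3, 1.0, size=8)            # small observed sample
t_obs = lrt_stat(x)

# parametric bootstrap under H0 (mu = 0, variance from the null fit)
s2_hat = np.mean(x ** 2)
boot = np.array([lrt_stat(rng.normal(0.0, np.sqrt(s2_hat), size=len(x)))
                 for _ in range(2000)])

# (1) Bartlett-type correction: rescale so the null mean matches df = 1
c_hat = boot.mean()                 # bootstrap estimate of E[T | H0] (> 1 here)
t_bartlett = t_obs / c_hat          # compare to chi2(1), e.g. 3.84 at the 5% level

# (2) Monte Carlo test: p-value from the estimated null distribution
p_mc = np.mean(boot >= t_obs)
```

In small samples the bootstrap null mean `c_hat` exceeds the asymptotic df, so the corrected statistic is smaller than the raw one, which is exactly how the inflation is removed.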


2021 ◽  
Author(s):  
Josue E. Rodriguez ◽  
Donald Ray Williams ◽  
Paul-Christian Bürkner

Categorical moderators are often included in mixed-effects meta-analyses to explain heterogeneity in effect sizes. Tests of moderator effects assume a constant between-study variance across all levels of the moderator. Although this assumption rarely receives serious thought, upholding it can have drastic ramifications. We propose that researchers should instead assume unequal between-study variances by default. To achieve this, we suggest using a mixed-effects location-scale model (MELSM) to allow group-specific estimates of the between-study variances. In two extensive simulation studies, we show that, in terms of Type I error and statistical power, almost nothing is lost by using the MELSM for moderator tests, but there can be serious costs when a mixed-effects model with equal variances is used. Most notably, in scenarios with balanced sample sizes or equal between-study variances, the Type I error and power rates are nearly identical between the mixed-effects model and the MELSM. On the other hand, with imbalanced sample sizes and unequal variances, the Type I error rate under the mixed-effects model can be grossly inflated or overly conservative, whereas the MELSM controlled the Type I error rate well across all scenarios. With respect to power, the MELSM had comparable or higher power than the mixed-effects model in all conditions where the latter produced valid (i.e., not inflated) Type I error rates. Altogether, our results strongly support assuming unequal between-study variances as the default strategy when testing categorical moderators.
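The MELSM in the paper is a Bayesian location-scale model (the authors' implementation builds on brms); the core idea of group-specific between-study variances can nonetheless be illustrated with a much simpler moments-based estimator. A hypothetical sketch in which each moderator level gets its own DerSimonian-Laird tau-squared rather than one pooled value (all numbers simulated for illustration):

```python
import numpy as np

def dl_tau2(yi, vi):
    # DerSimonian-Laird estimate of the between-study variance tau^2
    wi = 1.0 / vi
    ybar = np.sum(wi * yi) / np.sum(wi)
    q = np.sum(wi * (yi - ybar) ** 2)             # Cochran's Q
    c = np.sum(wi) - np.sum(wi ** 2) / np.sum(wi)
    return max(0.0, (q - (len(yi) - 1)) / c)

rng = np.random.default_rng(1)
k = 50                                            # studies per moderator level
v = np.full(k, 0.02)                              # within-study sampling variances

# two moderator levels with very unequal true between-study variances
y_a = rng.normal(0.2, np.sqrt(0.01 + 0.02), size=k)   # true tau^2 = 0.01
y_b = rng.normal(0.2, np.sqrt(0.20 + 0.02), size=k)   # true tau^2 = 0.20

tau2_a = dl_tau2(y_a, v)                          # level-specific estimates,
tau2_b = dl_tau2(y_b, v)                          # in the spirit of the MELSM
tau2_pooled = dl_tau2(np.r_[y_a, y_b], np.r_[v, v])   # equal-variance assumption
```

The pooled estimate averages over the two very different heterogeneity levels, which is precisely the misspecification the MELSM avoids.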


2021 ◽  
Article 001316442199489
Author(s):  
Luyao Peng ◽  
Sandip Sinharay

Wollack et al. (2015) suggested the erasure detection index (EDI) for detecting fraudulent erasures for individual examinees. Wollack and Eckerly (2017) and Sinharay (2018) extended this index to obtain three EDIs for detecting fraudulent erasures at the aggregate or group level. This article follows up on the research of Wollack and Eckerly (2017) and Sinharay (2018) and suggests a new aggregate-level EDI by incorporating the empirical best linear unbiased predictor (EBLUP) from the literature on linear mixed-effects models (e.g., McCulloch et al., 2008). A simulation study shows that the new EDI has greater power than the indices of Wollack and Eckerly (2017) and Sinharay (2018). In addition, the new index has satisfactory Type I error rates. A real data example is also included.
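The key ingredient of the new index is the EBLUP of a group-level random effect, which shrinks each group's observed deviation toward zero in proportion to its reliability. For a random-intercept model y_ij = mu + b_j + e_ij with known variance components, the EBLUP has a closed form; a small sketch with made-up group-level numbers (not the paper's data or its exact model):

```python
import numpy as np

def eblup(group_means, group_sizes, mu, tau2, sigma2):
    # EBLUP of the random intercepts b_j in y_ij = mu + b_j + e_ij:
    # each group's deviation from mu is shrunk toward 0 by its
    # reliability lambda_j = tau2 / (tau2 + sigma2 / n_j)
    lam = tau2 / (tau2 + sigma2 / np.asarray(group_sizes, dtype=float))
    return lam * (np.asarray(group_means, dtype=float) - mu)

# hypothetical mean erasure counts for three groups of examinees
means = [2.0, 3.5, 9.0]
sizes = [25, 30, 28]
b = eblup(means, sizes, mu=3.0, tau2=1.0, sigma2=4.0)
```

A group far above the overall mean (the third one here) keeps a large positive predicted effect, but it is always pulled toward zero relative to the raw deviation, which stabilizes aggregate-level flagging for small groups.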


2021 ◽  
Vol 48 (1) ◽  
pp. 51-77 ◽  
Author(s):  
Natalie M. Nielsen ◽  
Wouter A. C. Smink ◽  
Jean-Paul Fox

Abstract: The linear mixed effects model is an often-used tool for the analysis of multilevel data. However, this model has an ill-understood shortcoming: it assumes that observations within clusters are always positively correlated. This assumption is not always true: for example, individuals competing within a cluster for scarce resources are negatively correlated. Random effects in a mixed effects model can capture a positive correlation among clustered observations but not a negative one. As negative clustering effects are largely unknown to the vast majority of the research community, we conducted a simulation study to detail the bias that occurs when analysing negative clustering effects with the linear mixed effects model. We also demonstrate that ignoring a small negative correlation leads to deflated Type I errors and invalid standard errors and confidence intervals in regression analysis. When negative clustering effects are ignored, mixed effects models incorrectly assume that observations are independently distributed. We highlight the importance of understanding these phenomena through an analysis of the data from Lamers, Bohlmeijer, Korte, and Westerhof (2015). We conclude with a reflection on well-known multilevel modelling rules when dealing with negative dependencies in a cluster: negative clustering effects can, do, and will occur, and they cannot be ignored.
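The structural problem is easy to see in a simulation: with a negative within-cluster correlation, the ANOVA (method-of-moments) estimate of the random-intercept variance is negative, and a mixed model, which constrains that variance to be non-negative, floors it at zero and wrongly treats observations as independent. A self-contained numpy sketch (simulated data, not the Lamers et al. dataset):

```python
import numpy as np

rng = np.random.default_rng(2)

# clusters of size 2 with a negative within-cluster correlation
# (e.g., two individuals competing for a fixed resource: one's gain
# is the other's loss)
rho, sigma2, n_pairs = -0.4, 1.0, 2000
cov = sigma2 * np.array([[1.0, rho], [rho, 1.0]])
y = rng.multivariate_normal([0.0, 0.0], cov, size=n_pairs)

# ANOVA (method-of-moments) estimate of the random-intercept variance
msb = 2 * np.var(y.mean(axis=1), ddof=1)   # between-cluster mean square
msw = np.mean(np.var(y, axis=1, ddof=1))   # within-cluster mean square
tau2_mom = (msb - msw) / 2                 # approx. rho * sigma2 < 0 here
tau2_lmm = max(0.0, tau2_mom)              # a random intercept floors this at 0
```

The unconstrained moment estimate recovers roughly rho * sigma2 (about -0.4 here), while the constrained random-intercept estimate collapses to zero, which is the independence assumption the abstract warns about.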


2018 ◽  
Author(s):  
Van Rynald T Liceralde ◽  
Peter C. Gordon

Power transforms have been increasingly used in linear mixed-effects models (LMMs) of chronometric data (e.g., response times [RTs]) as a statistical solution to preempt violating the assumption of residual normality. However, differences in results between LMMs fit to raw RTs and transformed RTs have reignited discussions on issues concerning the transformation of RTs. Here, we analyzed three word-recognition megastudies and performed Monte Carlo simulations to better understand the consequences of transforming RTs in LMMs. Within each megastudy, transforming RTs produced different fixed- and random-effect patterns; across the megastudies, RTs were optimally normalized by different power transforms, and results were more consistent among LMMs fit to raw RTs. Moreover, the simulations showed that LMMs fit to optimally normalized RTs had greater power for main effects in smaller samples, but that LMMs fit to raw RTs had greater power for interaction effects as sample sizes increased, with negligible differences in Type I error rates between the two models. Based on these results, LMMs should be fit to raw RTs when there is no compelling reason beyond nonnormality to transform RTs and when the interpretive framework mapping the predictors and RTs treats RT as an interval scale.
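The megastudy LMM analyses themselves are beyond a short sketch, but the core mechanic, finding the power transform that best normalizes RTs, can be illustrated by maximizing the Box-Cox profile log-likelihood over a grid of exponents (all data simulated; the lognormal "RT-like" generator is an assumption for illustration):

```python
import numpy as np

def boxcox_ll(x, lam):
    # Box-Cox profile log-likelihood for positive data x at exponent lam
    xt = np.log(x) if abs(lam) < 1e-12 else (x ** lam - 1.0) / lam
    n = len(x)
    return -0.5 * n * np.log(np.var(xt)) + (lam - 1.0) * np.sum(np.log(x))

rng = np.random.default_rng(3)
# skewed "RT-like" data in ms (lognormal, so the true optimal lambda is 0)
rt = rng.lognormal(mean=6.3, sigma=0.5, size=5000)

grid = np.linspace(-2.0, 2.0, 81)                  # candidate exponents
ll = [boxcox_ll(rt, lam) for lam in grid]
lam_opt = grid[int(np.argmax(ll))]                 # optimally normalizing power
```

Different datasets yield different `lam_opt` values, which is the crux of the abstract's point: the "optimal" transform is sample-specific, so models fit to transformed RTs need not agree across studies the way models fit to raw RTs do.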


2017 ◽  
Vol 28 (3) ◽  
pp. 801-821
Author(s):  
Thomas O Jemielita ◽  
Mary E Putt ◽  
Devan V Mehrotra

Incomplete block crossover trials with period-specific baseline and post-baseline (outcome) measures for each subject are often used in clinical drug development; without loss of generality, we focus on the three-treatment, two-period (3 × 2) crossover. Data from such trials are commonly analyzed using a mixed effects model with indicator terms for treatment and period, and an unstructured covariance matrix for the vector of intra-subject measurements. It is well known that treatment effect estimates from this analysis are complex functions of both within-subject and between-subject treatment contrasts. We caution that the associated type I error rate and power for hypothesis testing can be non-trivially influenced by how the baselines are utilized. Specifically, the mixed effects analysis which uses change from baseline as the dependent variable is shown to consistently underperform corresponding analyses in which the outcome is the dependent variable and linear combinations of the baselines are used as period-specific and/or period-invariant covariates. A simpler fixed effects analysis of covariance involving only within-subject contrasts is also described for small sample situations in which the mixed effects analyses can suffer from increased type I error rates. Theoretical insights, simulation results and an illustrative example with real data are used to develop the main points.
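The advantage of baseline-as-covariate over change-from-baseline can be seen even in a deliberately simplified parallel-group analogue (not the crossover model of the paper): with baseline/outcome correlation rho, the change-score estimator has variance proportional to 2(1 - rho) while the ANCOVA estimator's is proportional to 1 - rho^2, which is smaller whenever rho < 1. A numpy sketch comparing the two empirically (simulation settings are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n, rho = 100, 0.5          # subjects per arm, baseline/outcome correlation

est_change, est_ancova = [], []
for _ in range(1000):
    # correlated (baseline, outcome) pairs in two parallel arms, null effect
    b0, y0 = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], n).T
    b1, y1 = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], n).T

    # change-from-baseline analysis: difference in mean change
    est_change.append((y1 - b1).mean() - (y0 - b0).mean())

    # ANCOVA: regress outcome on arm indicator + centered baseline
    b = np.concatenate([b0, b1]); y = np.concatenate([y0, y1])
    g = np.r_[np.zeros(n), np.ones(n)]
    X = np.column_stack([np.ones(2 * n), g, b - b.mean()])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    est_ancova.append(beta[1])             # treatment-effect estimate

var_change = np.var(est_change)            # ~ 2 * 2(1 - rho) / n
var_ancova = np.var(est_ancova)            # ~ 2 * (1 - rho^2) / n
```

The empirical variance of the ANCOVA estimator is smaller, mirroring the abstract's finding that treating baselines as covariates outperforms analyzing change scores.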


2021 ◽  
Author(s):  
Johannes Oberpriller ◽  
Melina de Souza Leite ◽  
Maximilian Pichler

Biological data are often intrinsically hierarchical. Due to their ability to account for such dependencies, mixed-effects models have become a common analysis technique in ecology and evolution. While many questions around their theoretical foundations and practical applications are solved, one fundamental question is still highly debated: when there is a low number of levels, should we model a grouping variable as a random or a fixed effect? In such situations, the variance of the random effect is presumably underestimated, but whether this affects the statistical properties of the fixed effects is unclear. Here, we analyze the consequences of including a grouping variable as a fixed or random effect, along with other possible modeling options (over- and underspecified models), for data with a small number of levels in the grouping variable (2-8). For all models, we calculated type I error rates, power, and coverage. Moreover, we show the influence of possible study designs on these statistical properties. We found that mixed-effects models correctly estimate the random-effect variance even with only two groups. Moreover, model choice does not influence the statistical properties when there is no random slope in the data-generating process. However, if an ecological effect differs among groups, using a random slope and intercept model, and switching to a fixed-effect model only in case of a singular fit, avoids overconfidence in the results. Additionally, power and type I error are strongly influenced by the number of groups and the differences between them. We conclude that inferring the correct random-effect structure is highly important for obtaining correct statistical properties. When in doubt, we recommend starting with the simpler model and using model diagnostics to identify missing components. Once the correct structure is identified, we encourage starting with a mixed-effects model independent of the number of groups and switching to a fixed-effect model only in case of a singular fit.
With these recommendations, we allow for more informative choices about study design and data analysis, and thus make ecological inference with mixed-effects models more robust for low numbers of groups.
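The recommended workflow, fit the random-effect structure first and fall back to fixed group effects only on a singular fit, can be sketched without a full REML fit by using the ANOVA moment estimator of the between-group variance, whose truncation at zero plays the role of a singular fit (function names and the fallback rule are illustrative, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(5)

def fit_group_effect(y, g, n_groups):
    # moment (ANOVA-style) estimate of the between-group variance tau^2;
    # if it is truncated to zero ("singular fit"), fall back to per-group
    # fixed effects instead of a random intercept
    means = np.array([y[g == j].mean() for j in range(n_groups)])
    nj = np.array([(g == j).sum() for j in range(n_groups)])
    msw = np.mean([np.var(y[g == j], ddof=1) for j in range(n_groups)])
    tau2 = max(0.0, np.var(means, ddof=1) - msw / nj.mean())
    if tau2 == 0.0:
        return "fixed", means       # singular: report fixed group effects
    return "random", tau2           # otherwise keep the random intercept

# three groups (a "low number of levels") with genuine between-group variation
g = np.repeat(np.arange(3), 30)
y = rng.normal(0.0, 1.0, 90) + np.array([0.0, 1.0, 2.0])[g]
kind, est = fit_group_effect(y, g, 3)
```

With real between-group variation the random-effect branch is kept even at three groups, consistent with the authors' finding that few levels alone are not a reason to avoid mixed-effects models.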


2000 ◽  
Vol 14 (1) ◽  
pp. 1-10 ◽  
Author(s):  
Joni Kettunen ◽  
Niklas Ravaja ◽  
Liisa Keltikangas-Järvinen

Abstract: We examined the use of smoothing to enhance the detection of response coupling from the activity of different response systems. Three different types of moving average smoothers were applied to both simulated interbeat interval (IBI) and electrodermal activity (EDA) time series and to empirical IBI, EDA, and facial electromyography time series. The results indicated that progressive smoothing increased the efficiency of the detection of response coupling but did not increase the probability of Type I error. The power of the smoothing methods depended on the response characteristics. The benefits and use of the smoothing methods to extract information from psychophysiological time series are discussed.
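The mechanism is that a moving average suppresses the independent fast noise in each channel while largely preserving a shared slow component, so the cross-correlation between the two systems rises. A minimal numpy sketch with simulated IBI- and EDA-like series (signal shape, window length, and noise levels are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)
n, k = 600, 10                                 # series length, smoother window
t = np.arange(n)
signal = np.sin(2 * np.pi * t / 60)            # shared slow component

# two "response systems" coupled through the shared signal,
# each with its own independent measurement noise
ibi = signal + rng.normal(0.0, 1.0, n)
eda = signal + rng.normal(0.0, 1.0, n)

def moving_average(x, k):
    # simple moving-average smoother with window k (valid region only)
    return np.convolve(x, np.ones(k) / k, mode="valid")

r_raw = np.corrcoef(ibi, eda)[0, 1]
r_smooth = np.corrcoef(moving_average(ibi, k),
                       moving_average(eda, k))[0, 1]
```

Because the smoother attenuates the uncorrelated noise by roughly a factor of 1/k in variance while the slow shared signal passes nearly intact, the smoothed correlation is substantially larger than the raw one, which is the coupling-detection gain the study quantifies.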

