Assessing meta-regression methods for examining moderator relationships with dependent effect sizes: A Monte Carlo simulation

2017 ◽  
Vol 8 (4) ◽  
pp. 435-450 ◽  
Author(s):  
José Antonio López-López ◽  
Wim Van den Noortgate ◽  
Emily E. Tanner-Smith ◽  
Sandra Jo Wilson ◽  
Mark W. Lipsey

2021 ◽  
Vol 3 (1) ◽  
pp. 61-89
Author(s):  
Stefan Geiß

This study uses Monte Carlo simulation techniques to estimate the minimum levels of intercoder reliability required in content analysis data for testing correlational hypotheses, depending on sample size, effect size, and coder behavior under uncertainty. The resulting procedure is analogous to power calculations for experimental designs. In the most widespread sample size/effect size settings, the rule of thumb that chance-adjusted agreement should be ≥.800 or ≥.667 corresponds to the simulation results, yielding acceptable α and β error rates. However, the simulation allows precise power calculations tailored to the specifics of each study's context, moving beyond one-size-fits-all recommendations. Studies with small samples and/or low expected effect sizes may need coder agreement above .800 to test a hypothesis with sufficient statistical power; in studies with large samples and/or high expected effect sizes, coder agreement below .667 may suffice. Such calculations can help in both evaluating and designing studies. Particularly in pre-registered research, larger samples may be used to compensate for low expected effect sizes and/or borderline coding reliability (e.g., when constructs are hard to measure). I supply equations, easy-to-use tables, and R functions to facilitate use of this framework, along with example code as an online appendix.
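The simulation logic described in the abstract can be sketched in a few lines (a simplified illustration in Python, not the author's supplied R functions; all parameter values and names are invented for this example): a latent relationship is coded with error at a given agreement level, and the share of significant correlation tests across replications estimates statistical power.

```python
import numpy as np
from scipy import stats

def simulated_power(n=200, true_r=0.3, agreement=0.8,
                    n_sims=500, alpha=0.05, seed=1):
    """Estimate power of a correlation test when a binary code is
    misclassified with probability (1 - agreement)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        # latent predictor and an outcome correlated with it at true_r
        x = rng.standard_normal(n)
        y = true_r * x + np.sqrt(1 - true_r**2) * rng.standard_normal(n)
        # dichotomize x into the "true" code, then flip a share of codes
        # to mimic imperfect coder agreement
        code = (x > 0).astype(int)
        flip = rng.random(n) < (1 - agreement)
        code[flip] = 1 - code[flip]
        r, p = stats.pearsonr(code, y)
        hits += p < alpha
    return hits / n_sims
```

Lower agreement attenuates the observed relationship, so power drops unless sample size or effect size compensates, which is the trade-off the study's tables quantify.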


2012 ◽  
Vol 45 (2) ◽  
pp. 576-594 ◽  
Author(s):  
Wim Van den Noortgate ◽  
José Antonio López-López ◽  
Fulgencio Marín-Martínez ◽  
Julio Sánchez-Meca

1999 ◽  
Vol 2 ◽  
pp. 32-38 ◽  
Author(s):  
Fulgencio Marín-Martínez ◽  
Julio Sánchez-Meca

When a primary study includes several indicators of the same construct, the usual strategy for meta-analytically integrating the multiple effect sizes is to average them within the study. This paper shows the numerical and conceptual differences among three procedures for averaging dependent effect sizes: the simple arithmetic mean, the Hedges and Olkin (1985) procedure, and the Rosenthal and Rubin (1986) procedure. Whereas the simple arithmetic mean ignores the dependence among effect sizes, the Hedges and Olkin and the Rosenthal and Rubin procedures both take the correlational structure of the effect sizes into account, although in different ways. Rosenthal and Rubin's procedure provides the effect size for a single composite variable made up of the multiple measures, whereas Hedges and Olkin's procedure estimates the effect size of the standard variable. The three procedures were applied to 54 conditions in which the magnitude and homogeneity of both the effect sizes and the correlation matrix among them were manipulated. Rosenthal and Rubin's procedure yielded the highest estimates, followed by the simple mean and then the Hedges and Olkin procedure, which yielded the lowest. These differences are not trivial in a meta-analysis, whose aims must guide the selection among the procedures.
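The contrast between the simple mean and the composite procedure can be sketched as follows (a minimal illustration; the function names and input values are invented, the composite formula is the one commonly attributed to Rosenthal and Rubin (1986), and the Hedges and Olkin procedure, which requires the full covariance structure of the estimates, is omitted):

```python
import math

def simple_mean(ds):
    """Arithmetic mean, ignoring dependence among the effect sizes."""
    return sum(ds) / len(ds)

def rosenthal_rubin_composite(ds, mean_r):
    """Effect size for a single composite of m correlated measures:
    d_bar * sqrt(m / (1 + (m - 1) * mean_r))."""
    m = len(ds)
    d_bar = sum(ds) / m
    return d_bar * math.sqrt(m / (1 + (m - 1) * mean_r))

ds = [0.40, 0.55, 0.35]  # three dependent effect sizes from one study
r_bar = 0.6              # mean correlation among the measures
```

Whenever the mean intercorrelation is below 1, the composite exceeds the simple mean, which matches the ordering of estimates reported in the abstract.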


2019 ◽  
Author(s):  
Melissa Angelina Rodgers ◽  
James E Pustejovsky

Selective reporting of results based on their statistical significance threatens the validity of meta-analytic findings. A variety of techniques for detecting selective reporting, publication bias, or small-study effects are available and are routinely used in research syntheses. Most such techniques are univariate, in that they assume each study contributes a single, independent effect size estimate to the meta-analysis. In practice, however, studies often contribute multiple, statistically dependent effect size estimates, such as multiple measures of a common outcome construct. Many methods are available for meta-analyzing dependent effect sizes, but methods for investigating selective reporting while also handling effect size dependencies require further investigation. Using Monte Carlo simulations, we evaluate three available univariate tests for small-study effects or selective reporting, including the Trim and Fill test, Egger's regression test, and a likelihood ratio test from a three-parameter selection model (3PSM), when dependence is ignored or handled using ad hoc techniques. We also examine two variants of Egger's regression test that incorporate robust variance estimation (RVE) or multi-level meta-analysis (MLMA) to handle dependence. Simulation results demonstrate that ignoring dependence inflates Type I error rates for all univariate tests. Variants of Egger's regression maintain Type I error rates when a single effect size per study is sampled or when dependence is handled using RVE or MLMA. The 3PSM likelihood ratio test does not fully control Type I error rates. With the exception of the 3PSM, all methods have limited power to detect selection bias except under strong selection for statistically significant effects.
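The univariate form of Egger's regression test can be sketched as follows (a minimal illustration assuming one independent effect size per study; it does not implement the RVE or MLMA variants evaluated in the study, and the data shapes are invented): the standardized effect is regressed on precision, and an intercept that differs from zero signals small-study effects.

```python
import numpy as np
from scipy import stats

def eggers_test(effects, std_errors):
    """Egger's regression test for small-study effects: regress the
    standardized effect (effect / SE) on precision (1 / SE) and test
    whether the intercept differs from zero (t-test, n - 2 df)."""
    effects = np.asarray(effects, dtype=float)
    se = np.asarray(std_errors, dtype=float)
    res = stats.linregress(1 / se, effects / se)
    t = res.intercept / res.intercept_stderr
    df = len(effects) - 2
    p_value = 2 * stats.t.sf(abs(t), df)
    return res.intercept, p_value
```

When studies contribute several dependent estimates each, feeding them all into this univariate test treats them as independent, which is the source of the Type I error inflation the simulations document.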


2021 ◽  
Vol 12 ◽  
Author(s):  
Valeria Sebri ◽  
Ilaria Durosini ◽  
Stefano Triberti ◽  
Gabriella Pravettoni

The experience of breast cancer and related treatments has notable effects on women's mental health. Among these effects, the subjective perception of the body, or body image (BI), is altered. Such alterations deserve proper treatment because they increase the risk of depression and mood disorders and impair intimate relationships. A number of studies have shown that focused psychological interventions are effective in reducing BI issues related to breast cancer; however, findings are inconsistent regarding the magnitude of such effects. This meta-analysis synthesizes and quantifies the efficacy of psychological interventions for BI in breast cancer patients and survivors. Additionally, since sexual functioning emerged as a relevant aspect of BI distortions, we also explored the efficacy of psychological interventions on sexual functioning related to BI. The literature search for relevant contributions was carried out in March 2020 through the following electronic databases: Scopus, PsycINFO, and ProQuest. Only articles available in English that featured psychological interventions for body image in breast cancer patients or survivors with control groups were included. Seven articles with 17 dependent effect sizes were selected for this meta-analysis. Variables were grouped into Body Image (six studies, nine dependent effect sizes) and Sexual Functioning Related to Body Image (four studies, eight dependent effect sizes). The three-level meta-analysis showed a statistically significant effect for Body Image [g = 0.50; 95% CI (0.08; 0.93); p < 0.05] but no significant result for Sexual Functioning Related to Body Image [g = 0.33; 95% CI (−0.20; 0.85); p = 0.19]. These results suggest that psychological interventions are effective in reducing body image issues, but not sexual functioning issues related to body image, in breast cancer patients and survivors.
Future review efforts may include gray literature and qualitative studies to better understand body image and sexual functioning issues in breast cancer patients; high-quality studies are also needed to inform future meta-analyses.
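For orientation, pooled estimates like the g values reported above rest on inverse-variance weighting, sketched here in a deliberately simplified fixed-effect form (the study itself fitted a three-level model to accommodate the dependence among effect sizes within studies, which this sketch does not do; all values are invented):

```python
import math

def pooled_effect(gs, variances):
    """Fixed-effect inverse-variance pooling of Hedges' g values,
    with a 95% confidence interval (normal approximation)."""
    weights = [1 / v for v in variances]
    g = sum(w * e for w, e in zip(weights, gs)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return g, (g - 1.96 * se, g + 1.96 * se)
```

A three-level model additionally estimates between-study and within-study heterogeneity, widening the interval relative to this naive version when effect sizes within a study are correlated.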

