A standardized mean difference effect size for single case designs

2012 ◽  
Vol 3 (3) ◽  
pp. 224-239 ◽  
Author(s):  
Larry V. Hedges ◽  
James E. Pustejovsky ◽  
William R. Shadish

2018 ◽  
Author(s):  
James E. Pustejovsky

A wide variety of effect size indices have been proposed for quantifying the magnitude of treatment effects in single-case designs. Commonly used measures include parametric indices such as the standardized mean difference, as well as non-overlap measures such as the percentage of non-overlapping data, improvement rate difference, and non-overlap of all pairs. Currently, little is known about the properties of these indices when applied to behavioral data collected by systematic direct observation, even though systematic direct observation is the most common method for outcome measurement in single-case research. This study uses Monte Carlo simulation to investigate the properties of several widely used single-case effect size measures when applied to systematic direct observation data. Results indicate that the magnitude of the non-overlap measures and of the standardized mean difference can be strongly influenced by procedural details of the study's design, which is a significant limitation to using these indices as effect sizes for meta-analysis of single-case designs. A less widely used parametric index, the log-response ratio, has the advantage of being insensitive to sample size and observation session length, although its magnitude is influenced by the use of partial interval recording.
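The parametric indices mentioned in this abstract have simple closed forms. As a rough sketch (not code from the study; the two-phase A/B series below is invented for illustration), the log-response ratio and non-overlap of all pairs can be computed as:

```python
import math

def log_response_ratio(baseline, treatment):
    """Log-response ratio: natural log of the ratio of phase means."""
    mean_a = sum(baseline) / len(baseline)
    mean_b = sum(treatment) / len(treatment)
    return math.log(mean_b / mean_a)

def nap(baseline, treatment):
    """Non-overlap of all pairs: share of (A, B) pairs with B > A
    (ties count as half), assuming higher scores indicate improvement."""
    total = len(baseline) * len(treatment)
    score = sum(1.0 if b > a else 0.5 if b == a else 0.0
                for a in baseline for b in treatment)
    return score / total

baseline = [3, 4, 3, 5]      # invented phase-A (baseline) observations
treatment = [6, 8, 4, 9, 7]  # invented phase-B (treatment) observations

print(round(log_response_ratio(baseline, treatment), 3))  # → 0.595
print(round(nap(baseline, treatment), 3))                 # → 0.925
```

Note that the log-response ratio depends only on the phase means, which is in line with the abstract's point that it is insensitive to sample size and session length, whereas NAP is built from every pairwise comparison between phases.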


2013 ◽  
Vol 4 (4) ◽  
pp. 324-341 ◽  
Author(s):  
Larry V. Hedges ◽  
James E. Pustejovsky ◽  
William R. Shadish

2014 ◽  
Vol 52 (2) ◽  
pp. 213-230 ◽  
Author(s):  
Hariharan Swaminathan ◽  
H. Jane Rogers ◽  
Robert H. Horner

Methodology ◽  
2010 ◽  
Vol 6 (2) ◽  
pp. 49-58 ◽  
Author(s):  
Rumen Manolov ◽  
Antonio Solanas ◽  
David Leiva

Effect size indices are indispensable for carrying out meta-analyses and can also serve as an alternative basis for deciding whether a treatment is effective in an individual applied study. Desirable features of procedures for quantifying the magnitude of an intervention effect include educational/clinical meaningfulness, ease of calculation, insensitivity to autocorrelation, and low false-alarm and miss rates. Three effect size indices related to visual analysis are compared according to these criteria. The comparison uses data sets with known parameters: degree of serial dependence, presence or absence of general trend, and changes in level and/or slope. The percentage of nonoverlapping data showed the highest discrimination between data sets with and without an intervention effect. When autocorrelation or trend is present, the percentage of data points exceeding the median may be a better option for quantifying the effectiveness of a psychological treatment.
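For concreteness, the two non-overlap indices singled out in this abstract can be sketched as follows (a hypothetical illustration, not the authors' code; the data are invented and assume an outcome expected to increase under treatment):

```python
from statistics import median

def pnd(baseline, treatment):
    """Percentage of non-overlapping data: share of treatment points
    that exceed the highest baseline point."""
    ceiling = max(baseline)
    return 100.0 * sum(b > ceiling for b in treatment) / len(treatment)

def pem(baseline, treatment):
    """Percentage of treatment points exceeding the baseline median."""
    med = median(baseline)
    return 100.0 * sum(b > med for b in treatment) / len(treatment)

baseline = [2, 3, 5, 3]      # invented phase-A observations
treatment = [4, 6, 7, 5, 8]  # invented phase-B observations

print(pnd(baseline, treatment))  # → 60.0
print(pem(baseline, treatment))  # → 100.0
```

The contrast in this toy example illustrates the abstract's conclusion: a single high baseline point caps PND at 60%, while PEM, which compares against the baseline median rather than its maximum, is unaffected by that outlier.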

