Randomization procedures in single-case intervention research contexts: (Some of) "the rest of the story"

2019 ◽ Vol 112 (3) ◽ pp. 334-348
Author(s): Joel R. Levin, Thomas R. Kratochwill, John M. Ferron

2017 ◽ Vol 39 (2) ◽ pp. 118-128
Author(s): Matt Tincani, Jason Travers

Demonstration of experimental control is considered a hallmark of high-quality single-case research design (SCRD). Studies that fail to demonstrate experimental control may not be published because researchers are unwilling to submit these papers for publication and journals are unlikely to publish negative results (i.e., the file drawer effect). SCRD studies make up a large proportion of intervention research in special education. Consequently, the existing body of research, composed mainly of studies that show experimental control, may artificially inflate the apparent efficacy of interventions. We discuss how experimental control evolved as the standard for high-quality SCRD; why, in the era of evidence-based practice, rigorous studies that fail to fully demonstrate experimental control are important to include in the body of published intervention research; the role of non-replication studies in discovering intervention boundaries; and considerations for researchers who wish to conduct and appraise studies that fail to yield full experimental control.


2017 ◽ Vol 39 (2) ◽ pp. 67-76
Author(s): Thomas R. Kratochwill, Joel R. Levin, Robert H. Horner

The central roles of science in the field of remedial and special education are to (a) identify basic laws of nature and (b) apply those laws in the design of practices that achieve socially valued outcomes. The scientific process is designed to allow demonstration of specific (typically positive) outcomes, and to assist in the attribution of those outcomes to controlled variables. Although growing recognition is being given to the importance of replication in this process, equal consideration should be given to the function of publishing studies that document negative (or null) results. In this manuscript, we outline the features of negative results in educational and psychological single-case intervention research. We also discuss the assessment, methodological, and statistical dimensions that should be considered when reporting negative results. The importance of replication studies (direct, systematic, and clinical) is also discussed within the context of negative-results research.


2013 ◽ Vol 7 (10) ◽ pp. 1257-1264
Author(s): Kim Bulkeley, Anita Bundy, Jacqueline Roberts, Stewart Einfeld

2012 ◽ Vol 34 (1) ◽ pp. 26-38
Author(s): Thomas R. Kratochwill, John H. Hitchcock, Robert H. Horner, Joel R. Levin, Samuel L. Odom, ...

Author(s): Daniel H. Robinson, Joel R. Levin, Steve Graham, Gregory Schraw, Lynn Fuchs, ...

This article discusses various forms of research currently being undertaken to investigate or establish the efficacy of educational interventions, along with their strengths and limitations. It first explains what "credible" educational intervention research means, taking into account the importance of causal inference in intervention research methodologies, before turning to single-case intervention designs and how they can be profitably applied in a number of educational and psychological intervention research contexts. It then describes randomization as a means to enhance the scientific credibility of single-case intervention research and how theory can make intervention research more credible. Finally, it offers recommendations for conducting, analyzing, and reporting educational intervention research, with an eye toward improving its quality and associated credibility.


2016 ◽ Vol 37 (4) ◽ pp. 195-204
Author(s): Jason C. Travers, Bryan G. Cook, William J. Therrien, Michael D. Coyne

Replicating previously reported empirical research is a necessary aspect of an evidence-based field of special education, but little formal investigation into the prevalence of replication research in the special education literature has been conducted. Various factors may explain the lack of attention to replication of special education intervention research, including an emphasis on quantity of publications, esteem for novel findings, and barriers to publishing high-quality studies with null or negative effects. This article introduces the special issue on replication of special education intervention research by first providing an overview of concepts and issues related to replication. Specific attention is then given to replication as it relates to group design and single-case experimental design research, two prominent albeit philosophically different empirical methodologies. We then briefly describe how replications using these research designs can be conducted in complementary ways to better understand intervention effects and advance evidence-based practices in special education.


2017 ◽ Vol 41 (3) ◽ pp. 187-197
Author(s): Michele A. Lobo, Mariola Moeyaert, Andrea Baraldi Cunha, Iryna Babik
