Replication Failure — Recently Published Documents

Total documents: 33 (five years: 3) · H-index: 7 (five years: 0)

Linguistics (2021), Vol. 0(0)
Author(s): Jack Grieve

Abstract: In this paper, I propose that replication failure in linguistics may be due primarily to inherent issues with the application of experimental methods to analyze an inextricably social phenomenon like language, as opposed to poor research practices. Because language use varies across social contexts, and because social context must vary across independent experimental replications, linguists should not be surprised when experimental results fail to replicate at the expected rate. To address issues with replication failure in linguistics, and to increase methodological rigor in our field more generally, I argue that linguists must use experimental methods carefully, keeping in mind their inherent limitations, while acknowledging the scientific value of observational methods, which are often the only way to pursue basic questions in our field.


2021, Vol. 7(21), pp. eabd1705
Author(s): Marta Serra-Garcia, Uri Gneezy

We use publicly available data to show that published papers in top psychology, economics, and general interest journals that fail to replicate are cited more than those that replicate. This difference in citation does not change after the publication of the failure to replicate. Only 12% of postreplication citations of nonreplicable findings acknowledge the replication failure. Existing evidence also shows that experts predict well which papers will be replicated. Given this prediction, why are nonreplicable papers accepted for publication in the first place? A possible answer is that the review team faces a trade-off. When the results are more “interesting,” they apply lower standards regarding their reproducibility.


2021, Vol. 15(1)
Author(s): Jacob M. Schauer, Kaitlyn G. Fitzgerald, Sarah Peko-Spicer, Mena C. R. Whalen, Rrita Zejnullahi, et al.

2020, Vol. 24(4), pp. 316-344
Author(s): Leandre R. Fabrigar, Duane T. Wegener, Richard E. Petty

In recent years, psychology has wrestled with the broader implications of disappointing rates of replication of previously demonstrated effects. This article proposes that many aspects of this pattern of results can be understood within the classic framework of four proposed forms of validity: statistical conclusion validity, internal validity, construct validity, and external validity. The article explains the conceptual logic for how differences in each type of validity across an original study and a subsequent replication attempt can lead to replication “failure.” Existing themes in the replication literature related to each type of validity are also highlighted. Furthermore, empirical evidence is considered for the role of each type of validity in non-replication. The article concludes with a discussion of broader implications of this classic validity framework for improving replication rates in psychological research.


2020, Vol. 20(3), pp. 129-139
Author(s): Gary VanLandingham

Evaluators have long sought a world in which our work makes a tangible difference to society, but that goal has often seemed out of reach. In recent years, however, advocates have proclaimed an era of evidence-based policymaking in which the 'What Works' data generated by evaluations will be increasingly used to inform programme and policy choices. Four primary factors have been critical to the rise of this approach: attaining a critical mass of curated 'What Works' evidence, growing interest among political leaders in considering this information when making choices, new budgetary mechanisms for using these data and new tools that facilitate rigorous outcome studies. However, the movement also faces critical challenges, including the growing distrust of empirical data among some political factions, leaks in the evaluation pipeline that generates data to identify What Works and the replication failure of many evidence-based interventions. The evaluation field should support this movement through efforts to plug leaks in the evidence pipeline, stronger efforts to assess implementation challenges, training students in evidence-based approaches and assisting in outreach to policymakers.


2020, Vol. 117(19), pp. 10378-10387
Author(s): Qiaoyu Lin, Bin Yu, Xiangyang Wang, Shicong Zhu, Gan Zhao, et al.

Barrier-to-autointegration factor (BAF) is a highly conserved protein in metazoans that has multiple functions during the cell cycle. We found that BAF is SUMOylated at K6, and that this modification is essential for its nuclear localization and function, including nuclear integrity maintenance and DNA replication. K6-linked SUMOylation of BAF promotes binding and interaction with lamin A/C to regulate nuclear integrity. K6-linked SUMOylation of BAF also supports BAF binding to DNA and proliferating cell nuclear antigen and regulates DNA replication. SENP1 and SENP2 catalyze the de-SUMOylation of BAF at K6. Disrupting the SUMOylation and de-SUMOylation cycle of BAF at K6 not only disturbs nuclear integrity, but also induces DNA replication failure. Taken together, our findings demonstrate that SUMOylation at K6 is an important regulatory mechanism that governs the nuclear functions of BAF in mammalian cells.


2020, Vol. 29(3), pp. 270-288
Author(s): Friederike Hendriks, Dorothe Kienhues, Rainer Bromme

In methodological and practical debates about replications in science, it is (often implicitly) assumed that replications will affect public trust in science. In this preregistered experiment (N = 484), we varied (a) whether a replication attempt was successful or not and (b) whether the replication was authored by the same lab or by another lab. Study credibility (e.g. evidence strength, ηp² = .15) and researcher trustworthiness (e.g. expertise, ηp² = .15) were rated higher upon learning of replication success, and lower in the case of replication failure. The replication's author did not make a meaningful difference. Prior beliefs acted as a covariate for ratings of credibility, but not trustworthiness, while epistemic beliefs regarding the certainty of knowledge were a covariate to both. Hence, laypeople seem to notice that successfully replicated results entail higher epistemic significance, while possibly not taking into account that replications should be conducted by other labs.


2019, Vol. 48(9), pp. 611-613
Author(s): Jeffrey Valentine

This commentary addresses three issues raised in the articles in this issue. First, conversations about replication efforts should begin with a reasonable and agreed-upon definition of what it means to say that a study did or did not replicate the results of another study. Second, if a replication failure has been identified, using the surface similarity of the studies to reverse-engineer an explanation is unlikely to be helpful. Finally, researchers and consumers should expect small and heterogeneous effects, and this fact points to the need to think meta-analytically.


2019, Vol. 48(9), pp. 599-607
Author(s): James S. Kim

Why, when so many educational interventions demonstrate positive impact in tightly controlled efficacy trials, are null results common in follow-up effectiveness trials? Using case studies from literacy, this article suggests that replication failure can surface hidden moderators—contextual differences between an efficacy and an effectiveness trial—and generate new hypotheses and questions to guide future research. First, replication failure can reveal systemic barriers to program implementation. Second, it can highlight for whom and in what contexts a program's theory of change works best. Third, it suggests that a fidelity-first, adaptation-second model of program implementation can enhance the effectiveness of evidence-based interventions and improve student outcomes. Ultimately, researchers can make every study count by learning from both replication success and failure to improve the rigor, relevance, and reproducibility of intervention research.
