Recommendations for Increasing Replicability in Psychology

2013 ◽  
Vol 27 (2) ◽  
pp. 108-119 ◽  
Author(s):  
Jens B. Asendorpf ◽  
Mark Conner ◽  
Filip De Fruyt ◽  
Jan De Houwer ◽  
Jaap J. A. Denissen ◽  
...  

Replicability of findings is at the heart of any empirical science. The aim of this article is to move the current replicability debate in psychology towards concrete recommendations for improvement. We focus on research practices but also offer guidelines for reviewers, editors, journal management, teachers, granting institutions, and university promotion committees, highlighting some of the emerging and existing practical solutions that can facilitate implementation of these recommendations. The challenges for improving replicability in psychological science are systemic. Improvement can occur only if changes are made at many levels of practice, evaluation, and reward. Copyright © 2013 John Wiley & Sons, Ltd.

2021 ◽  
Vol 4 (2) ◽  
pp. 251524592110181
Author(s):  
Manikya Alister ◽  
Raine Vickers-Jones ◽  
David K. Sewell ◽  
Timothy Ballard

Judgments regarding replicability are vital to scientific progress. The metaphor of “standing on the shoulders of giants” encapsulates the notion that progress is made when new discoveries build on previous findings. Yet attempts to build on findings that are not replicable could mean a great deal of time, effort, and money wasted. In light of the recent “crisis of confidence” in psychological science, the ability to accurately judge the replicability of findings may be more important than ever. In this Registered Report, we examine the factors that influence psychological scientists’ confidence in the replicability of findings. We recruited corresponding authors of articles published in psychology journals between 2014 and 2018 to complete a brief survey in which they were asked to consider 76 specific study attributes that might bear on the replicability of a finding (e.g., preregistration, sample size, statistical methods). Participants were asked to rate the extent to which information regarding each attribute increased or decreased their confidence in the finding being replicated. We examined the extent to which each research attribute influenced average confidence in replicability. We found evidence for six reasonably distinct underlying factors that influenced these judgments and individual differences in the degree to which people’s judgments were influenced by these factors. The conclusions reveal how certain research practices affect other researchers’ perceptions of robustness. We hope our findings will help encourage the use of practices that promote replicability and, by extension, the cumulative progress of psychological science.


2020 ◽  
Author(s):  
D. Stephen Lindsay

Psychological scientists strive to advance understanding of how and why we animals do and think and feel as we do. This is difficult, in part because flukes of chance and measurement error obscure researchers’ perceptions. Many psychologists use inferential statistical tests to peer through the murk of chance and discern relationships between variables. Those tests are powerful tools, but they must be wielded with skill. Moreover, research reports must convey to readers a detailed and accurate understanding of how the data were obtained and analyzed. Research psychologists often fall short in those regards. This paper attempts to motivate and explain ways to enhance the transparency and replicability of psychological science. Specifically, I speak to how publication bias and p-hacking contribute to effect-size exaggeration in the published literature, and how effect-size exaggeration contributes, in turn, to replication failures. Then I present seven steps toward addressing these problems: telling the truth; upgrading statistical knowledge; standardizing aspects of research practices; documenting lab procedures in a lab manual; making materials, data, and analysis scripts transparent; addressing constraints on generality; and collaborating.


2019 ◽  
Author(s):  
Daniel Lakens

For over two centuries, researchers have been criticized for using research practices that make it easier to present data in line with what they wish to be true. With the rise of the internet, it has become easier to preregister the theoretical and empirical basis for predictions, the experimental design, the materials, and the analysis code. Whether the practice of preregistration is valuable depends on your philosophy of science. Here, I provide a conceptual analysis of the value of preregistration for psychological science from an error statistical philosophy (Mayo, 2018). The goal of preregistration is to allow others to transparently evaluate the capacity of a test to falsify a prediction, or the severity of a test. Researchers who aim to test predictions with severity should find value in the practice of preregistration. I differentiate the goal of preregistration from its positive externalities, discuss how preregistration itself does not make a study better or worse compared to a non-preregistered study, and highlight the importance of evaluating the usefulness of a tool such as preregistration based on an explicit consideration of your philosophy of science.


2018 ◽  
Author(s):  
Olivier Klein ◽  
Tom Elis Hardwicke ◽  
Frederik Aust ◽  
Johannes Breuer ◽  
Henrik Danielsson ◽  
...  

The credibility of scientific claims depends upon the transparency of the research products upon which they are based (e.g., study protocols, data, materials, and analysis scripts). As psychology navigates a period of unprecedented introspection, user-friendly tools and services that support open science have flourished. There has never been a better time to embrace transparent research practices. However, the plethora of decisions and choices involved can be bewildering. Here we provide a practical guide to help researchers navigate the process of preparing and sharing the products of their research. Being an open scientist means adopting a few straightforward research management practices, which lead to less error-prone, reproducible research workflows. Further, this adoption can be piecemeal: each incremental step towards complete transparency adds positive value. Transparent research practices not only improve the efficiency of individual researchers but also enhance the credibility of the knowledge generated by the scientific community.


2019 ◽  
Author(s):  
Simon Dennis ◽  
Paul Michael Garrett ◽  
Hyungwook Yim ◽  
Jihun Hamm ◽  
Adam F Osth ◽  
...  

Pervasive internet and sensor technologies promise to revolutionize psychological science. However, the data collected using these technologies are often very personal; indeed, the value of the data is often directly related to how personal it is. At the same time, driven by the replication crisis, there is a sustained push to publish data to open repositories. These movements are in fundamental conflict. In this paper, we propose a way to navigate this issue. We argue that there are significant advantages to be gained by ceding the ownership of data to the participants who generate it. Then we provide desiderata for a privacy-preserving platform. In particular, we suggest that researchers should use an interface to perform experiments and run analyses rather than observing the stimuli themselves. We argue that this method not only improves privacy but will also encourage greater compliance with good research practices than is possible with open repositories.


2021 ◽  
Author(s):  
Brian A. Nosek ◽  
Tom Elis Hardwicke ◽  
Hannah Moshontz ◽  
Aurélien Allard ◽  
Katherine S. Corker ◽  
...  

Replication, an important, uncommon, and misunderstood practice, is gaining appreciation in psychology. Achieving replicability is important for making research progress. If findings are not replicable, then prediction and theory development are stifled. If findings are replicable, then interrogation of their meaning and validity can advance knowledge. Assessing replicability can be productive for generating and testing hypotheses by actively confronting current understanding to identify weaknesses and spur innovation. For psychology, the 2010s might be characterized as a decade of active confrontation. Systematic and multi-site replication projects assessed current understanding and observed surprising failures to replicate many published findings. Replication efforts highlighted sociocultural challenges, such as disincentives to conduct replications, framing of replication as personal attack rather than healthy scientific practice, and headwinds for replication contributing to self-correction. Nevertheless, innovation in doing and understanding replication, and its cousins, reproducibility and robustness, have positioned psychology to improve research practices and accelerate progress.


2017 ◽  
Vol 12 (4) ◽  
pp. 660-664 ◽  
Author(s):  
Scott O. Lilienfeld

The past several years have been a time for soul searching in psychology, as we have gradually come to grips with the reality that some of our cherished findings are less robust than we had assumed. Nevertheless, the replication crisis highlights the operation of psychological science at its best, as it reflects our growing humility. At the same time, institutional variables, especially the growing emphasis on external funding as an expectation or de facto requirement for faculty tenure and promotion, pose largely unappreciated hazards for psychological science, including (a) incentives for engaging in questionable research practices, (b) a single-minded focus on programmatic research, (c) intellectual hyperspecialization, (d) disincentives for conducting direct replications, (e) stifling of creativity and intellectual risk taking, (f) researchers promising more than they can deliver, and (g) diminished time for thinking deeply. Preregistration should assist with (a), but will do little about (b) through (g). Psychology is beginning to right the ship, but it will need to confront the increasingly deleterious impact of the grant culture on scientific inquiry.


2019 ◽  
Author(s):  
Richard Ramsey

The credibility of psychological science has been questioned recently, due to low levels of reproducibility and the routine use of inadequate research practices (Chambers, 2017; Open Science Collaboration, 2015; Simmons, Nelson, & Simonsohn, 2011). In response, wide-ranging reform to scientific practice has been proposed (e.g., Munafò et al., 2017), which has been dubbed a “credibility revolution” (Vazire, 2018). My aim here is to advocate why and how we should embrace such reform, and discuss the likely implications.


2020 ◽  
Author(s):  
Soufian Azouaghe ◽  
Adeyemi Adetula ◽  
Patrick S. Forscher ◽  
Dana Basnight-Brown ◽  
Nihal Ouherrou ◽  
...  

The quality of scientific research is assessed not only by its positive impact on socio-economic development and human well-being, but also by its contribution to the development of valid and reliable scientific knowledge. Thus, researchers, regardless of their scientific discipline, are expected to adopt research practices based on transparency and rigor. However, the history of science and the scientific literature teach us that some scientific results are not systematically reproducible (Ioannidis, 2005). This is what is commonly known as the "replication crisis", which concerns the natural sciences as well as the social sciences, of which psychology is no exception. Firstly, we aim to address some aspects of the replication crisis and Questionable Research Practices (QRPs). Secondly, we discuss how we can involve more labs in Africa in the global research process, especially the Psychological Science Accelerator (PSA). For these goals, we will develop a tutorial for labs in Africa, highlighting open science practices. In addition, we emphasize that it is essential to identify African labs' needs, the factors that hinder their participation in the PSA, and the support needed from the Western world. Finally, we discuss how to make psychological science more participatory and inclusive.


2020 ◽  
Author(s):  
NiCole Buchanan ◽  
Marisol Perez ◽  
Mitch Prinstein ◽  
Idia Thurston

As efforts to end systemic racism gain momentum across various contexts, it is critical to consider anti-racist steps that will be required to improve psychological science. Current scientific practices serve to maintain white supremacy with significant and impactful consequences. Extant research practices reinforce norms of homogeneity within BIPOC (Black, Indigenous, and other People of Color) populations, segregate theories and methods derived from BIPOC groups, apply disparate standards to the evaluation of research on White vs. BIPOC populations, and discourage BIPOC scholars from pursuing research careers. Perhaps consequently, mental and physical health disparities remain largely unimproved. In this article we present examples of how epistemic oppression exists within psychological science, including how science is conducted, reported, reviewed, and disseminated. Specific recommendations are offered for many stakeholders, including those involved in the production, reporting, and gatekeeping of science as well as consumers of science. Additionally, we present a diversity accountability index for journals with potential benchmarks for measuring progress as one strategy to promote dialogue and action, challenge inequity, and upend the influence of white supremacy in psychological science.

