Blueberries and Cognitive Ability: A Test of Publication Bias and Questionable Research Practices

2020 ◽  
Vol 75 (8) ◽  
pp. e22-e23
Author(s):  
Christopher R Brydges ◽  
Laura Gaeta


2019 ◽  
Author(s):  
Christopher Brydges ◽  
Laura Gaeta

Accepted in The Journals of Gerontology, Series A: Biological Sciences and Medical Sciences as a letter to the editor.


2019 ◽  
Author(s):  
Gregory Francis ◽  
Evelina Thunell

Based on findings from six experiments, Dallas, Liu & Ubel (2019) concluded that placing calorie labels to the left of menu items influences consumers to choose lower calorie food options. Contrary to previously reported findings, they suggested that calorie labels do influence food choices, but only when placed to the left because they are in this case read first. If true, these findings have important implications for the design of menus and may help address the obesity pandemic. However, an analysis of the reported results indicates that they seem too good to be true. We show that if the effect sizes in Dallas et al. (2019) are representative of the populations, a replication of the six studies (with the same sample sizes) has a probability of only 0.014 of producing uniformly significant outcomes. Such a low success rate suggests that the original findings might be the result of questionable research practices or publication bias. We therefore caution readers and policy makers to be skeptical about the results and conclusions reported by Dallas et al. (2019).
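The replication probability computed here is, in essence, the product of the six studies' estimated powers: if each study is an independent replication attempt, all six succeed with probability equal to the product of the individual powers. A minimal sketch of that calculation, using made-up effect sizes and sample sizes (not the values reported by Dallas et al.) and a large-sample normal approximation to two-sample t-test power:

```python
from statistics import NormalDist

def approx_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample t-test for
    standardized effect d, via the normal approximation."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    ncp = d * (n_per_group / 2) ** 0.5  # noncentrality parameter
    return 1 - z.cdf(z_crit - ncp)

# Hypothetical (effect size, per-group n) pairs for six studies;
# these are illustrative values only, not those of Dallas et al.
studies = [(0.45, 50), (0.40, 60), (0.50, 45),
           (0.35, 70), (0.42, 55), (0.38, 65)]

powers = [approx_power(d, n) for d, n in studies]
p_all_significant = 1.0
for p in powers:
    p_all_significant *= p  # chance that all six replications succeed
```

Even when each individual study has moderate power, the joint probability of six uniformly significant outcomes is small, which is the intuition behind this style of "too good to be true" critique.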


2021 ◽  
pp. 56-90
Author(s):  
R. Barker Bausell

The linchpin of both publication bias and irreproducibility involves an exhaustive list of more than a score of individually avoidable questionable research practices (QRPs), supplemented by 10 inane institutional research practices. While these untoward effects on the production of false-positive results are unsettling, a far more entertaining (in a masochistic sort of way) pair of now famous iconoclastic experiments conducted by Simmons, Nelson, and Simonsohn is presented in which, with the help of only a few well-chosen QRPs, research participants can actually become older after simply listening to a Beatles song. In addition, surveys designed to estimate the prevalence of these and other QRPs in the published literatures are also described.


2019 ◽  
Vol 2 (2) ◽  
pp. 115-144 ◽  
Author(s):  
Evan C. Carter ◽  
Felix D. Schönbrodt ◽  
Will M. Gervais ◽  
Joseph Hilgard

Publication bias and questionable research practices in primary research can lead to badly overestimated effects in meta-analysis. Methodologists have proposed a variety of statistical approaches to correct for such overestimation. However, it is not clear which methods work best for data typically seen in psychology. Here, we present a comprehensive simulation study in which we examined how some of the most promising meta-analytic methods perform on data that might realistically be produced by research in psychology. We simulated several levels of questionable research practices, publication bias, and heterogeneity, and used study sample sizes empirically derived from the literature. Our results clearly indicated that no single meta-analytic method consistently outperformed all the others. Therefore, we recommend that meta-analysts in psychology focus on sensitivity analyses—that is, report on a variety of methods, consider the conditions under which these methods fail (as indicated by simulation studies such as ours), and then report how conclusions might change depending on which conditions are most plausible. Moreover, given the dependence of meta-analytic methods on untestable assumptions, we strongly recommend that researchers in psychology continue their efforts to improve the primary literature and conduct large-scale, preregistered replications. We provide detailed results and simulation code at https://osf.io/rf3ys and interactive figures at http://www.shinyapps.org/apps/metaExplorer/ .
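The core mechanism this simulation study examines can be illustrated in a few lines: when only "significant" primary studies reach publication, a naive average of the published effects overestimates the true effect. A minimal sketch, with all parameters (true effect, sample size, number of simulated studies) chosen purely for illustration:

```python
import random

random.seed(1)

TRUE_D = 0.2           # assumed true standardized effect (illustrative)
N_PER_GROUP = 40       # assumed per-group sample size (illustrative)
SE = (2 / N_PER_GROUP) ** 0.5   # approx. standard error of Cohen's d
K = 5000               # number of simulated primary studies

all_estimates, published = [], []
for _ in range(K):
    d_hat = random.gauss(TRUE_D, SE)   # observed effect in one study
    all_estimates.append(d_hat)
    if d_hat / SE > 1.96:              # only significant results "published"
        published.append(d_hat)

naive = sum(published) / len(published)        # biased meta-analytic mean
unbiased = sum(all_estimates) / len(all_estimates)
```

The gap between `naive` and `unbiased` is the overestimation that bias-correction methods try to remove; the paper's point is that no single correction handles every combination of QRPs, selection, and heterogeneity.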


2021 ◽  
pp. 074193252199645
Author(s):  
Bryan G. Cook ◽  
Daniel M. Maggin ◽  
Rachel E. Robertson

This article introduces a special series of registered reports in Remedial and Special Education. Registered reports are an innovative approach to publishing that aim to increase the credibility of research. Registered reports are provisionally accepted for publication before a study is conducted, based on the importance of the research questions and the rigor of the proposed methods. If provisionally accepted, the journal agrees to publish the study if researchers adhere to accepted plans and report the study appropriately, regardless of study findings. In this article, we describe how registered reports work, review their benefits (e.g., combatting questionable research practices and publication bias, allowing expert reviewers to provide constructive feedback before a study is conducted) and limitations (e.g., requires additional time and effort, cannot be applied to all studies), review the application of registered reports in education and special education, and make recommendations for implementing registered reports in special education.


Author(s):  
Holly L. Storkel ◽  
Frederick J. Gallun

Purpose: This editorial introduces the new registered reports article type for the Journal of Speech, Language, and Hearing Research. The goal of registered reports is to create a structural solution to address issues of publication bias toward results that are unexpected and sensational, questionable research practices that are used to produce novel results, and a peer-review process that occurs at the end of the research process when changes in fundamental design are difficult or impossible to implement. Conclusion: Registered reports can be a positive addition to scientific publications by addressing issues of publication bias, questionable research practices, and the late influence of peer review. This article type does so by requiring reviewers and authors to agree in advance that the experimental design is solid, the questions are interesting, and the results will be publishable regardless of the outcome. This procedure ensures that replication studies and null results make it into the published literature and that authors are not incentivized to alter their analyses based on the results that they obtain. Registered reports represent an ongoing commitment to research integrity and finding structural solutions to structural problems inherent in a research and publishing landscape in which publications are such a high-stakes aspect of individual and institutional success.


2018 ◽  
Author(s):  
Christopher Brydges

Objectives: Research has found evidence of publication bias, questionable research practices (QRPs), and low statistical power in published psychological journal articles. Isaacowitz's (2018) editorial in the Journals of Gerontology, Series B: Psychological Sciences called for investigation of these issues in gerontological research. The current study presents meta-research findings based on published research to explore whether there is evidence of these practices in gerontological research. Method: 14,481 test statistics and p values were extracted from articles published in eight top gerontological psychology journals since 2000. Frequentist and Bayesian caliper tests were used to test for publication bias and QRPs (specifically, p-hacking and incorrect rounding of p values). A z-curve analysis was used to estimate average statistical power across studies. Results: Strong evidence of publication bias was observed, and average statistical power was approximately .70 – below the recommended .80 level. Evidence of p-hacking was mixed. Evidence of incorrect rounding of p values was inconclusive. Discussion: Gerontological research is not immune to publication bias, QRPs, and low statistical power. Researchers, journals, institutions, and funding bodies are encouraged to adopt open and transparent research practices and to use Registered Reports as an alternative article type to minimize publication bias and QRPs and increase statistical power.
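The caliper test used in this study compares how many test statistics fall in a narrow band just above versus just below the significance threshold; under no selection, the two counts should be roughly equal, so an excess just above is evidence of publication bias. A minimal one-sided binomial version, applied to toy z statistics (hypothetical values, for illustration only):

```python
from math import comb

def caliper_test(z_values, z_crit=1.96, width=0.10):
    """One-sided binomial caliper test: are z statistics
    over-represented just above the significance threshold?"""
    above = sum(1 for z in z_values if z_crit < z <= z_crit + width)
    below = sum(1 for z in z_values if z_crit - width < z <= z_crit)
    n = above + below
    # Under H0 a statistic in the caliper is equally likely to fall
    # on either side: P(X >= above) with X ~ Binomial(n, 0.5)
    p = sum(comb(n, k) for k in range(above, n + 1)) / 2 ** n
    return above, below, p

# Toy z statistics clustered just above 1.96 (illustrative only)
zs = [2.01, 1.97, 2.03, 1.99, 2.05, 1.90, 2.02, 1.98, 2.04, 1.88]
above, below, p = caliper_test(zs)
```

With many values piled just over the threshold and few just under it, the binomial p-value is small, flagging possible selection; the study's Bayesian variant works on the same above/below counts.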


2017 ◽  
Author(s):  
Evan C Carter ◽  
Felix D. Schönbrodt ◽  
Will M Gervais ◽  
Joseph Hilgard

Publication bias and questionable research practices in primary research can lead to badly overestimated effects in meta-analysis. Methodologists have proposed a variety of statistical approaches to correct for such overestimation. However, much of this work has not been tailored specifically to psychology, so it is not clear which methods work best for data typically seen in our field. Here, we present a comprehensive simulation study to examine how some of the most promising meta-analytic methods perform on data that might realistically be produced by research in psychology. We created such scenarios by simulating several levels of questionable research practices, publication bias, heterogeneity, and using study sample sizes empirically derived from the literature. Our results clearly indicated that no single meta-analytic method consistently outperformed all others. Therefore, we recommend that meta-analysts in psychology focus on sensitivity analyses—that is, report on a variety of methods, consider the conditions under which these methods fail (as indicated by simulation studies such as ours), and then report how conclusions might change based on which conditions are most plausible. Moreover, given the dependence of meta-analytic methods on untestable assumptions, we strongly recommend that researchers in psychology continue their efforts on improving the primary literature and conducting large-scale, pre-registered replications. We provide detailed results and simulation code at https://osf.io/rf3ys and interactive figures at http://www.shinyapps.org/apps/metaExplorer/.


2020 ◽  
Vol 4 ◽  
Author(s):  
Evelina Thunell ◽  
Gregory Francis

Based on findings from six experiments, Dallas, Liu, and Ubel (2019) conclude that placing calorie labels to the left of menu items influences consumers to choose lower calorie food options. Contrary to previously reported findings, they suggest that calorie labels can influence food choices, but only when placed to the left because they are in this case read first. If true, these findings have important implications for the design of menus and may help address the obesity pandemic. However, an analysis of the reported results indicates that they seem too good to be true. We show that if the effect sizes in Dallas et al. (2019) are representative of the populations, a replication of the six studies (with the same sample sizes) has a probability of only 0.014 of producing uniformly significant outcomes. Such a low success rate suggests that the original findings might be the result of questionable research practices or publication bias. We therefore caution readers and policy makers to be skeptical about the results and conclusions reported by Dallas et al. (2019).

