"Competition between multiple causes of a single outcome in causal reasoning": Correction to Darredeau et al. (2009).

2009 ◽  
Vol 35 (2) ◽  
pp. 278-278
Author(s):  
Christine Darredeau ◽  
Irina Baetu ◽  
Andrew G. Baker ◽  
Robin A. Murphy

2009 ◽  
Vol 35 (1) ◽  
pp. 1-14 ◽  
Author(s):  
Christine Darredeau ◽  
Irina Baetu ◽  
Andrew G. Baker ◽  
Robin A. Murphy

2013 ◽  
Author(s):  
Robert I. Bowers ◽  
William D. Timberlake

2006 ◽  
Vol 63 (1) ◽  
pp. 144 ◽  
Author(s):  
Marcel G. M. Olde Rikkert ◽  
Wiesje M. van der Flier ◽  
Frank Erik deLeeuw ◽  
Marcel Verbeek ◽  
René W. M. M. Jansen ◽  
...  

2021 ◽  
Vol 7 ◽  
pp. 237802312110244
Author(s):  
Katrin Auspurg ◽  
Josef Brüderl

In 2018, Silberzahn, Uhlmann, Nosek, and colleagues published an article in which 29 teams analyzed the same research question with the same data: Are soccer referees more likely to give red cards to players with dark skin tone than to players with light skin tone? The results obtained by the teams differed extensively. Many concluded from this widely noted exercise that the social sciences are not rigorous enough to provide definitive answers. In this article, we investigate why the results diverged so much. We argue that the main reason was an unclear research question: Teams differed in their interpretation of the research question and therefore used diverse research designs and model specifications. We show by reanalyzing the data that with a clear research question, a precise definition of the parameter of interest, and theory-guided causal reasoning, results vary only within a narrow range. The broad conclusion of our reanalysis is that social science research needs to be more precise in its “estimands” to become credible.


Author(s):  
Michael Shreeves ◽  
Leo Gugerty ◽  
DeWayne Moore

Abstract

Background: Research on causal reasoning often uses group-level data analyses that downplay individual differences and simple reasoning problems that are unrepresentative of everyday reasoning. In three empirical studies, we used an individual differences approach to investigate the cognitive processes people used in fault diagnosis, which is a complex diagnostic reasoning task. After first showing how high-level fault diagnosis strategies can be composed of simpler causal inferences, we discussed how two of these strategies—elimination and inference to the best explanation (IBE)—allow normative performance, which minimizes the number of diagnostic tests, whereas backtracking strategies are less efficient. We then investigated whether the use of normative strategies was infrequent and associated with greater fluid intelligence and positive thinking dispositions, and whether normative strategies used slow, analytic processing while non-normative strategies used fast, heuristic processing.

Results: Across three studies and 279 participants, uses of elimination and IBE were infrequent, and most participants used inefficient backtracking strategies. Fluid intelligence positively predicted elimination and IBE use but not backtracking use. Positive thinking dispositions predicted avoidance of backtracking. After classifying participants into groups that consistently used elimination, IBE, and backtracking, we found that participants who used elimination and IBE made fewer, but slower, diagnostic tests compared to backtracking users.

Conclusions: Participants’ fault diagnosis performance showed wide individual differences. Use of normative strategies was predicted by greater fluid intelligence and more open-minded and engaged thinking dispositions. Elimination and IBE users made the slow, efficient responses typical of analytic processing. Backtracking users made the fast, inefficient responses suggestive of heuristic processing.

