reasoning errors
Recently Published Documents


TOTAL DOCUMENTS: 43 (FIVE YEARS: 2)
H-INDEX: 9 (FIVE YEARS: 0)

PLoS ONE, 2021, Vol 16 (3), pp. e0249051
Author(s): Ans Vercammen, Alexandru Marcoci, Mark Burgman

Groups have access to more diverse information and typically outperform individuals on problem-solving tasks. Crowdsolving utilises this principle to generate novel and/or superior solutions to intellective tasks by pooling the inputs from a distributed online crowd. However, it is unclear whether this particular instance of the "wisdom of the crowd" can overcome the influence of potent cognitive biases that habitually lead individuals to commit reasoning errors. We empirically test the prevalence of cognitive bias on a popular crowdsourcing platform, examining the susceptibility to bias of online panels at the individual and aggregate levels. We then investigate the use of the Cognitive Reflection Test, notable for its predictive validity for both susceptibility to cognitive biases in test settings and real-life reasoning, as a screening tool to improve collective performance. We find that systematic biases in crowdsourced answers are not as prevalent as anticipated, but when they occur, biases are amplified with increasing group size, as predicted by the Condorcet Jury Theorem. The results further suggest that pre-screening individuals with the Cognitive Reflection Test can substantially enhance collective judgement and improve crowdsolving performance.
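The amplification effect the abstract attributes to the Condorcet Jury Theorem can be made concrete with a few lines of exact binomial arithmetic: when individual accuracy is above chance, majority voting boosts it toward 1, but when a shared bias pushes individual accuracy below chance, the same aggregation drives the group answer toward 0. The function below is an illustrative sketch, not code from the study.

```python
from math import comb

def majority_correct(p: float, n: int) -> float:
    """Probability that a simple majority of n independent voters,
    each correct with probability p, gives the correct answer.
    n is assumed odd so there are no ties."""
    k = n // 2 + 1  # smallest number of correct votes forming a majority
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# Individuals slightly better than chance: the crowd amplifies accuracy.
print(majority_correct(0.6, 1), majority_correct(0.6, 101))
# Individuals biased below chance: the crowd amplifies the error.
print(majority_correct(0.4, 1), majority_correct(0.4, 101))
```

With p = 0.6 a 101-person majority is almost always right, while with p = 0.4 it is almost always wrong, which is exactly the "biases are amplified with increasing group size" pattern the abstract describes.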


2021
Author(s): Paula Carvalho, Danielle Caled, Mário J. Silva, Bruno Martins, João Paulo Carvalho, ...

Abstract: The development of explainable news credibility prediction models is critical both for fighting the viral propagation of misinformation and for improving media literacy. This work investigates a variety of content indicators covering different semantic and discourse dimensions, such as title representativeness, reasoning errors, and sentiment intensity. These indicators were inspired by a previous study conducted for English news, aimed at reaching a collective consensus on which indicators could be widely used for predicting news credibility. This new study, performed by a multi-disciplinary team, relies on a corpus of 80 news articles from Portuguese mainstream and alternative news media, which were annotated by junior and senior journalists. The assessment of the corpus annotations provides insight into the prevalence of different indicators in each type of news source. The results obtained for Portuguese correlate in most cases with the ones reported for English, which motivates the adoption of common standards for supporting the collaborative development of interoperable automatic misinformation detection approaches.


2020, Vol 40 (4), pp. 605-628
Author(s): Yi Song, Szu-Fu Chao, Yigal Attali

We designed scaffolded tasks that targeted the skill of identifying reasoning errors and conducted a study with 472 middle school students. The study results showed a small positive impact of the scaffolding on student performance on one topic, but not the other, indicating that student skills of writing critiques could be affected by the topic and argument content. Additionally, students from low-SES families did not perform as well as their peers. Student performance on the critique tasks had moderate or strong correlations with students’ state reading and writing test scores. Implications of the scaffolding and critique task design are discussed.


2020, Vol 26 (2), pp. 106-115
Author(s): Mariusz Urbański

Since the end of the 20th century we have been witnessing a practical, or cognitive, turn in logic. Drawing on the enormous achievements brought about by the mathematical turn that started more than a hundred years ago, logic has now come back to its Aristotelian roots as an instrument by which we come to know anything. The re-forged alliance between logic, now well equipped with sophisticated formal tools, and psychology is producing increasingly substantial developments in studies of human reasoning and problem solving. To reap the fruits of this alliance we need to be aware that it shifts the focal points of interest of such studies and expands their methodological repertoire. In this paper I argue that the practical turn in logic results in: (1) the concept of error becoming crucial for formal modelling of human reasoning processes; (2) the prescriptive perspective, which takes into account human limitations in information processing, becoming the most interesting vantage point for such research; and (3) the triangulation of formal methods, quantitative approaches, and qualitative analyses becoming the most effective methodology in formal modelling studies.


2020, Vol 73 (10), pp. 1695-1702
Author(s): André Mata

Research on problem-solving, judgement, and decision making documents systematic reasoning errors. Such errors are often attributed to reasoning shortcomings, an inability to think properly. However, recent research suggests another cause for those errors: insufficient attention to the critical premises in a problem, resulting in miscomprehension, such that, even if a person is capable of reasoning properly, she will fail to solve the problem correctly if she is operating on wrong premises. The first study in this article provided further evidence for this comprehension account of reasoning errors: Performance on reasoning problems was found to relate to verbal comprehension on a separate task. This suggests that reasoning errors are in part due to lack of comprehension. The upside of this account is that it should be possible to improve reasoning performance by drawing attention to the critical premises. Three additional studies provided consistent evidence for this hypothesis, showing that the same participants who at first proved unable to solve certain problems correctly were able to overcome this inability and performed better when simple attention-capturing devices drew their attention to the critical premises.


2020
Author(s): Katya Tentori

In this chapter, I will briefly summarize and discuss the main results obtained from more than three decades of studies on the conjunction fallacy (hereafter CF) and will argue that this striking and widely debated reasoning error is a robust phenomenon that can systematically affect laypeople’s as much as experts’ probabilistic inferences, with potentially relevant real-life consequences. I will then introduce what is, in my view, the best explanation for the CF and indicate how it allows the reconciliation of some classic probabilistic reasoning errors with the outstanding reasoning performances that humans have been shown capable of. Finally, I will tackle the open issue of the greater accuracy and reliability of evidential impact assessments over those of posterior probability and outline how further research on this topic might also contribute to the development of effective human-like computing.
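The conjunction fallacy described above violates a direct consequence of the probability axioms: a conjunction can never be more probable than either of its conjuncts. A few lines of arithmetic make this concrete; the numbers below are purely illustrative and not taken from the chapter.

```python
# A toy probability model for the classic "Linda" problem.
# Illustrative numbers only (assumptions, not data from the chapter):
p_teller = 0.05                # P(Linda is a bank teller)
p_feminist_given_teller = 0.6  # P(Linda is a feminist, given she is a teller)

# Conjunction rule: P(A and B) = P(A) * P(B | A) <= P(A),
# since P(B | A) can never exceed 1.
p_conj = p_teller * p_feminist_given_teller
assert p_conj <= p_teller  # holds for any choice of probabilities

print(f"P(teller) = {p_teller}, P(teller and feminist) = {p_conj}")
```

People commit the fallacy when they judge the conjunction ("teller and feminist") as more probable than the single conjunct ("teller"), which no assignment of probabilities can make true.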


Uncertainty, 2019, pp. 3-18
Author(s): Kostas Kampourakis, Kevin McCain

Whereas we may think that knowledge requires certainty, this is not the case. We may be psychologically certain about something and still be wrong because of faults in our perception, because we are deceived, or because of reasoning errors. Epistemic certainty is impossible to achieve because we cannot be epistemically certain about anything. Fortunately, neither epistemic nor psychological certainty is a requirement for knowledge. Rather, what knowledge requires is good, solid evidence on the basis of which we can make choices and decisions. The better the evidence is, the better the choices and decisions can be. Thus, we can lead our lives successfully by relying on good evidence. We can certainly live with uncertainty.


2019
Author(s): Lace Padilla

Given the widespread use of visualizations and their impact on health and safety, it is important to ensure that viewers interpret visualizations as accurately as possible. Ensemble visualizations are an increasingly popular method for visualizing data, as emerging research demonstrates that ensembles can effectively and intuitively communicate traditionally difficult statistical concepts. While a few studies have identified drawbacks to ensemble visualizations, no studies have identified the sources of reasoning biases that could occur with ensemble visualizations. Our previous work with hurricane forecast simulation ensemble visualizations identified a misunderstanding that could have resulted from the visual features of the display. The current study tested the hypothesis that visual-spatial biases, which are biases that are a direct result of the visualization technique, provide a cognitive mechanism to explain this misunderstanding. In three experiments, we tested the role of the visual elements of ensemble visualizations as well as knowledge about the visualization with novice participants (n = 303). The results suggest that previously documented reasoning errors with ensemble displays can be influenced both by changes to the visualization technique and by top-down knowledge-driven processing.

