Seeing, fast and slow: the effects of processing time on perceptual bias

2019 ◽  
Author(s):  
Ron Dekel ◽  
Dov Sagi

Abstract
Fast and slow decisions exhibit distinct behavioral properties, such as the presence of decision bias in faster but not slower responses. This dichotomy is currently explained by assuming that distinct cognitive processes map to separate brain mechanisms. Here, we suggest an alternative, single-process account based on the stochastic properties of decision processes. Our experimental results show perceptual biases in a variety of tasks (specifically: learned priors, tilt illusion, and tilt aftereffect) that were much reduced with increasing reaction time. To account for this, we consider a simple yet general explanation: prior and noisy decision-related evidence are integrated serially, with evidence and noise accumulating over time (as in the standard drift diffusion model). With time, owing to noise accumulation, the prior effect is predicted to diminish. This illustrates that a clear behavioral separation – presence vs. absence of bias – may reflect a simple stochastic mechanism.

Highlights
- Perceptual and decisional biases are reduced in slower decisions.
- Simple mechanistic single-process account for slow bias-free decisions.
- Signal detection theory criterion is ~zero for decision times > median.
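The mechanism described in this abstract can be illustrated with a toy drift-diffusion simulation (a minimal sketch, not the authors' actual model or parameters): starting the accumulator closer to one bound stands in for a prior, and splitting simulated trials at the median reaction time shows the choice bias concentrated in the fast half.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm(n_trials=5000, drift=0.0, start=0.3, bound=1.0,
                 noise=1.0, dt=0.005, max_t=5.0):
    """Simulate diffusion-to-bound trials with a biased starting point.

    `start` > 0 shifts the initial evidence toward the upper bound,
    standing in for a prior. Returns per-trial choices (+1 = upper,
    -1 = lower, 0 = no bound hit within max_t) and reaction times.
    """
    n_steps = int(max_t / dt)
    x = np.full(n_trials, float(start))
    choices = np.zeros(n_trials)
    rts = np.full(n_trials, max_t)
    active = np.ones(n_trials, dtype=bool)
    for step in range(n_steps):
        # Euler step of the diffusion for still-active trials
        x[active] += drift * dt + noise * np.sqrt(dt) * rng.standard_normal(active.sum())
        hit = active & (np.abs(x) >= bound)
        choices[hit] = np.sign(x[hit])
        rts[hit] = (step + 1) * dt
        active &= ~hit
        if not active.any():
            break
    return choices, rts

choices, rts = simulate_ddm()
fast = rts <= np.median(rts)
bias_fast = (choices[fast] == 1).mean()   # P(prior-consistent choice), fast half
bias_slow = (choices[~fast] == 1).mean()  # same for the slow half; closer to 0.5
```

With zero drift, the starting-point offset alone produces a strong bias in fast responses, while slow responses (where accumulated noise has washed out the initial conditions) hover near chance, matching the presence-vs.-absence dichotomy described above.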

2019 ◽  
Author(s):  
Ron Dekel ◽  
Dov Sagi

Abstract
Following exposure to an oriented stimulus, the perceived orientation is slightly shifted, a phenomenon termed the tilt aftereffect (TAE). This estimation bias, as well as other context-dependent biases, is speculated to reflect statistical mechanisms of inference that optimize visual processing. Importantly, although measured biases are extremely robust in the population, the magnitude of individual bias can be extremely variable. For example, measuring different individuals may result in TAE magnitudes that differ by a factor of 5. Such findings appear to challenge the accounts of bias in terms of learned statistics: is inference so different across individuals? Here, we found that a strong correlation exists between reaction time and TAE, with slower individuals having much less TAE. In the tilt illusion, the spatial analogue of the TAE, we found a similar, though weaker, correlation. These findings can be explained by a theory predicting that bias, caused by a change in the initial conditions of evidence accumulation (e.g., prior), decreases with decision time (Dekel & Sagi, 2019b). We contend that the context-dependence of visual processing is more homogeneous in the population than was previously thought, with the measured variability of perceptual bias explained, at least in part, by the flexibility of decision-making. Homogeneity in processing might reflect the similarity of the learned statistics.

Highlights
- The tilt aftereffect (TAE) exhibits large individual differences.
- Reduced TAE magnitudes are found in slower individuals.
- Reduced TAE in slower decisions can be explained by the reduced influence of the prior.
- Therefore, individual variability can reflect decision-making flexibility.


Author(s):  
Laura R. Winer ◽  
Richard F. Schmid

The present study maintains that consistently effective learning materials can best be generated if the prescriptions instructional designers use are founded on learning theory. It is also considered critical that the cognitive processes central to the task demands, and the strategies employed to address them, be established. To be practical, we further recommend that only a single, process-oriented lesson, rather than individualized treatment, be implemented. Instructional simulations met these criteria, being tightly bound to Bruner's theoretical approach, and inherently capable of addressing aptitude deficiencies. Subjects were assessed for spatial visualization ability, grouped, randomly assigned to simulation or non-simulation treatments, and tested immediately, one week, and five weeks after instruction. The simulation significantly increased the high-aptitude learners' efficiency (and initially their effectiveness), and the low-aptitude learners' effectiveness. The validity of a theory-based, aptitude-enhancing, standardized approach was supported, and is discussed.


Author(s):  
Roland H. Grabner ◽  
Clemens Brunner ◽  
Valerie Lorenz ◽  
Stephan E. Vogel ◽  
Bert De Smedt

Abstract
There is broad consensus that adults solve single-digit multiplication problems almost exclusively by fact retrieval (i.e., retrieval of the solution from an arithmetic fact network). In contrast, there has been a long-standing debate on the cognitive processes involved in solving single-digit addition problems. This debate has evolved around two theoretical accounts. The fact-retrieval account postulates that these are solved through fact retrieval, just like multiplications, whereas the compacted-procedure account proposes that solving very small additions (i.e., problems with operands between 1 and 4) involves highly automatized and unconscious compacted procedures. In the present electroencephalography (EEG) study, we put these two accounts to the test by comparing neurophysiological correlates of solving very small additions and multiplications. A sample of 40 adults worked on an arithmetic production task involving all (non-tie) single-digit additions and multiplications. Afterwards, participants completed trial-by-trial strategy self-reports. In our EEG analyses, we focused on induced activity (event-related synchronization/desynchronization, ERS/ERD) in three frequency bands (theta, lower alpha, upper alpha). Across all frequency bands, we found higher evidential strength for similar rather than different neurophysiological processes accompanying the solution of very small addition and multiplication problems. This was also true when n + 1 and n × 1 problems were excluded from the analyses. In two additional analyses, we showed that ERS/ERD can differentiate between self-reported problem-solving strategies (retrieval vs. procedure) and even between n + 1 and n + m problems in very small additions, demonstrating its high sensitivity to cognitive processes in arithmetic. The present findings clearly support the fact-retrieval account, suggesting that both very small additions and multiplications are solved through fact retrieval.

Highlights
- Neurophysiological test of the fact-retrieval and compacted-procedure accounts
- Induced EEG data are sensitive to cognitive processes in arithmetic problem solving
- Both very small additions and multiplications are solved through fact retrieval


2021 ◽  
Author(s):  
Marius Frenken ◽  
Wanja Hemmerich ◽  
David Izydorczyk ◽  
Sophie Elisabeth Scharf ◽  
Roland Imhoff

A rich body of research points to racial biases in so-called police officer dilemma tasks: participants are generally faster and less error-prone to “shoot” (vs. not “shoot”) Black (vs. White) targets. In three experimental (and two supplemental) studies (total N = 914), we aimed to examine the cognitive processes underlying these findings under fully standardized conditions. To be able to dissect a priori decision bias, biased information processing, and motor preparation, we rendered video sequences of virtual avatars that differed in nothing but the tone of their skin. Modeling the data via drift diffusion models revealed that the threat of a social group can be explicitly learned and mapped accordingly onto an a priori response bias within the model (Study 1). Studies 2 and 3 replicated the racial shooter bias as apparent in faster reaction times in stereotype-consistent trials. This, however, appears to result from stereotype-consistent motoric preparation and execution readiness, but not from pre-judicial threat biases. The results have implications especially for automatic stereotypes in the public sphere.


Assessment ◽  
2020 ◽  
pp. 107319112096231
Author(s):  
Elad Omer ◽  
Tomer Elbaum ◽  
Yoram Braw

Forced-choice performance validity tests are routinely used for the detection of feigned cognitive impairment. The drift diffusion model deconstructs performance into distinct cognitive processes using accuracy and response time measures. It thereby offers a unique approach for gaining insight into examinees’ speed-accuracy trade-offs and the cognitive processes that underlie their performance. The current study is the first to perform such analyses using a well-established forced-choice performance validity test. To achieve this aim, archival data of healthy participants, either simulating cognitive impairment in the Word Memory Test or performing it to the best of their ability, were analyzed using the EZ-diffusion model (N = 198). The groups differed in the three model parameters, with drift rate emerging as the best predictor of group membership. These findings provide initial evidence for the usefulness of the drift diffusion model in clarifying the cognitive processes underlying feigned cognitive impairment and encourage further research.
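The EZ-diffusion model used in this study has a closed-form solution (Wagenmakers, van der Maas, & Grasman, 2007): drift rate, boundary separation, and non-decision time are recovered directly from proportion correct, RT variance, and mean RT. A minimal sketch of those equations (conventional scaling s = 0.1; edge cases such as perfect or chance-level accuracy require a correction not shown here):

```python
import math

def ez_diffusion(pc, vrt, mrt, s=0.1):
    """Closed-form EZ-diffusion parameters from proportion correct (pc),
    RT variance (vrt, in s^2), and mean RT (mrt, in s).
    Assumes 0.5 < pc < 1; extreme accuracies need the published edge correction."""
    L = math.log(pc / (1 - pc))                     # logit of accuracy
    x = L * (L * pc**2 - L * pc + pc - 0.5) / vrt
    v = math.copysign(s * x**0.25, pc - 0.5)        # drift rate
    a = s**2 * L / v                                # boundary separation
    y = -v * a / s**2
    mdt = (a / (2 * v)) * (1 - math.exp(y)) / (1 + math.exp(y))  # mean decision time
    ter = mrt - mdt                                 # non-decision time
    return v, a, ter

v, a, ter = ez_diffusion(pc=0.8, vrt=0.112, mrt=0.723)
# v ≈ 0.099, a ≈ 0.140, ter ≈ 0.300
```

Because all three parameters follow deterministically from three summary statistics, group differences in drift rate (as reported above) translate directly into differences in observed accuracy and RT dispersion.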


2017 ◽  
Author(s):  
Matthias Stangl ◽  
Jonathan Shine ◽  
Thomas Wolbers

Abstract
Human fMRI studies examining the putative firing of grid cells (i.e., the grid code) suggest that this cellular mechanism supports not only spatial navigation, but also more abstract cognitive processes. This research area, however, remains relatively unexplored, perhaps due to the complexities of data analysis. To overcome this, we have developed the Matlab-based Grid Code Analysis Toolbox (GridCAT), providing a graphical user interface and open-source code for the analysis of fMRI data. The GridCAT performs all analyses, from estimation and fitting of the grid code in the general linear model, to the generation of grid code metrics and plots. Moreover, it is flexible in allowing the specification of bespoke analysis pipelines; example data are provided to demonstrate the GridCAT’s main functionality. We believe the GridCAT is essential to opening this research area to the imaging community, and to helping elucidate the role of human grid codes in higher-order cognitive processes.

Highlights
- The putative firing of grid cells (i.e., the grid code) can be examined using fMRI
- Necessary steps for grid code analysis are reviewed
- The Matlab-based Grid Code Analysis Toolbox (GridCAT) is introduced
- Automated grid code analysis can be conducted via either a graphical user interface or open-source code
- A detailed manual and an example dataset are provided
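The core estimation step that toolboxes like GridCAT automate can be sketched in a few lines (a toy illustration in Python rather than the toolbox's Matlab, with synthetic data standing in for a voxel time course): a regression onto a sin/cos quadrature pair at six-fold periodicity recovers the putative grid orientation, which is defined only modulo 60°.

```python
import numpy as np

rng = np.random.default_rng(1)
n_folds = 6                                  # hexadirectional (six-fold) symmetry
phi_true = np.deg2rad(12.0)                  # hypothetical grid orientation
theta = rng.uniform(0, 2 * np.pi, 500)       # movement direction per event

# Synthetic "voxel" response: hexadirectionally modulated signal plus noise
y = np.cos(n_folds * (theta - phi_true)) + 0.5 * rng.standard_normal(theta.size)

# Estimation GLM: regress the signal onto a sin/cos quadrature pair
X = np.column_stack([np.cos(n_folds * theta), np.sin(n_folds * theta)])
beta_cos, beta_sin = np.linalg.lstsq(X, y, rcond=None)[0]

# Recovered orientation (defined only modulo 360/n_folds = 60 degrees)
phi_hat = np.arctan2(beta_sin, beta_cos) / n_folds
```

A second GLM would then test the aligned regressor cos(n_folds * (theta - phi_hat)) on held-out data; in a real pipeline both steps operate on preprocessed fMRI time series rather than synthetic samples.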



Author(s):  
Silvia Formica ◽  
Carlos González-García ◽  
Mehdi Senoussi ◽  
Marcel Brass

Abstract
Humans are capable of flexibly converting symbolic instructions into novel behaviors. Previous evidence and theoretical models suggest that the implementation of a novel instruction requires the reformatting of its declarative content into an action-oriented code optimized for the execution of the instructed behavior. While neuroimaging research has focused on identifying the brain areas involved in such a process, the temporal and electrophysiological mechanisms remain poorly understood. These mechanisms, however, can provide information about the specific cognitive processes that characterize the proceduralization of information. In the present study, we recorded EEG activity while we asked participants either to simply maintain the declarative content of novel S-R mappings or to proactively prepare for their implementation. By means of time-frequency analyses, we isolated the oscillatory features specific to the proceduralization of instructions. Implementation of the instructed mappings elicited stronger theta activity over frontal electrodes and suppression of mu and beta activity over central electrodes. In contrast, activity in the alpha band, which has been shown to track the attentional deployment to task-relevant items, showed no differences between tasks. Together, these results support the idea that proceduralization of information is characterized by specific component processes, such as orchestrating complex task settings and configuring the motor system, that are not observed when instructions are held in a declarative format.

Highlights
- Frontal theta power is increased during instruction implementation
- Attentional orienting in WM is analogous across maintenance and implementation
- Instruction implementation involves motor recruitment


2021 ◽  
Author(s):  
Michelle Donzallaz ◽  
Julia M. Haaf ◽  
Claire Stevenson

When producing creative ideas (i.e., ideas that are original and useful), two main processes occur: ideation, where people brainstorm ideas, and evaluation, where they decide whether the ideas are creative or not. While much is known about the ideation phase, the cognitive processes involved in creativity evaluation are largely unclear. In this paper, we present a novel modeling approach for the evaluation phase of creativity. We apply the drift diffusion model (DDM) to the creative-or-not (CON) task to study the cognitive basis of evaluation and to examine individual differences in the extent to which people take originality and utility into account when evaluating creative ideas. The CON task is a timed decision-making task where participants indicate whether they find uses for certain objects creative or not (e.g., using a book as a buoy). The different use items vary on the two creativity dimensions ‘originality’ and ‘utility’. In two studies (n = 293, 17,806 trials; n = 152, 9,291 trials), we found that stimulus originality was strongly related to participants’ drift rate, whereas stimulus utility was only somewhat associated with the drift rate. However, participants differed substantially in the effects of originality and utility. Furthermore, the implicit weights assigned to originality and utility on the CON task were aligned with self-reported importance ratings of originality and utility, and associated with divergent thinking performance. Our findings underline the importance of communicating rating criteria in divergent thinking tasks such as the alternative uses task to ensure a fair assessment of creative ability.


2019 ◽  
Author(s):  
Ron Dekel ◽  
Dov Sagi

Abstract
The processing of a visual stimulus is known to be influenced by the statistics in recent visual history and by the stimulus’ visual surround. Such contextual influences lead to perceptually salient phenomena, such as the tilt aftereffect and the tilt illusion. Despite much research on the influence of an isolated context, it is not clear how multiple, possibly competing sources of contextual influence interact. Here, using psychophysical methods, we compared the combined influence of multiple contexts to the sum of the isolated context influences. The results showed large deviations from linear additivity for adjacent or overlapping contexts, and remarkably, clear additivity when the contexts were sufficiently separated. Specifically, for adjacent or overlapping contexts, the combined effect was often lower than the sum of the isolated component effects (sub-additivity), or was more influenced by one component than another (selection). For contexts that were separated in time (600 ms), the combined effect measured the exact sum of the isolated component effects (in degrees of bias). Overall, the results imply an initial compressive transformation during visual processing, followed by selection between the processed parts.

Highlights
- Non-linear sub-additivity for increased context area or contrast
- Non-linear selection between overlapping or adjacent, dissimilar contexts
- Linear additivity for combinations of temporally separated contexts

