High- to low-level decoding does not generally improve perceptual performance

2017 · Author(s): Long Luu, Cheng Qiu, Alan A. Stocker

Ding et al. (1) recently proposed that the brain automatically encodes high-level, relative stimulus information (i.e., the ordinal relation between two lines), which it then uses to constrain the decoding of low-level, absolute stimulus features (i.e., the actual line orientations when they are recalled). This is an interesting idea that is in line with the self-consistent Bayesian observer model (2, 3) and may have important implications for understanding how the brain processes sensory information. However, the notion suggested by Ding et al. (1) that the brain uses this decoding strategy because it improves perceptual performance is misleading. Here we clarify the decoding model and compare its perceptual performance under various noise and signal conditions.

2021 · Vol 11 (1) · Author(s): Helen Feigin, Shira Baror, Moshe Bar, Adam Zaidel

Abstract Perceptual decisions are biased by recent perceptual history, a phenomenon termed 'serial dependence'. Here, we investigated what aspects of perceptual decisions lead to serial dependence, and disambiguated the influences of low-level sensory information, prior choices and motor actions. Participants discriminated whether a brief visual stimulus lay to the left or right of the screen center. Following a series of biased 'prior' location discriminations, subsequent 'test' location discriminations were biased toward the prior choices, even when these were reported via different motor actions (using different keys), and when the prior and test stimuli differed in color. By contrast, prior discriminations about an irrelevant stimulus feature (color) did not substantially influence subsequent location discriminations, even though these were reported via the same motor actions. Additionally, when color (not location) was discriminated, a bias in prior stimulus locations no longer influenced subsequent location discriminations. Although low-level stimuli and motor actions did not trigger serial dependence on their own, similarity of these features across discriminations boosted the effect. These findings suggest that relevance across perceptual decisions is a key factor for serial dependence. Accordingly, serial dependence likely reflects a high-level mechanism by which the brain predicts and interprets new incoming sensory information in accordance with relevant prior choices.
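For illustration, a minimal sketch of how such a choice-history bias might be quantified: a logistic psychometric model with an added previous-choice regressor, whose fitted weight indexes serial dependence. All variable names and values are hypothetical, not taken from the study.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, stim_pos, prev_choice, choice):
    """Logistic psychometric model: P(respond 'right') depends on the current
    stimulus position plus a shift induced by the previous choice."""
    bias, slope, history_weight = params
    drive = bias + slope * stim_pos + history_weight * prev_choice
    p_right = np.clip(1.0 / (1.0 + np.exp(-drive)), 1e-9, 1 - 1e-9)
    return -np.sum(choice * np.log(p_right) + (1 - choice) * np.log(1 - p_right))

# Simulated example data (positions in deg from screen centre; current choices
# coded 0/1, previous choices coded -1/+1). Real data would come from the task.
rng = np.random.default_rng(0)
n = 500
stim_pos = rng.uniform(-2, 2, n)
prev_choice = rng.choice([-1, 1], n)
true_p = 1 / (1 + np.exp(-(2.0 * stim_pos + 0.4 * prev_choice)))
choice = rng.binomial(1, true_p)

fit = minimize(neg_log_likelihood, x0=[0.0, 1.0, 0.0],
               args=(stim_pos, prev_choice, choice))
print("estimated history weight (serial dependence):", fit.x[2])
```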


2021 · pp. 1-15 · Author(s): Leor Zmigrod

Abstract Ideological behavior has traditionally been viewed as a product of social forces. Nonetheless, an emerging science suggests that ideological worldviews can also be understood in terms of neural and cognitive principles. The article proposes a neurocognitive model of ideological thinking, arguing that ideological worldviews may be manifestations of individuals’ perceptual and cognitive systems. This model makes two claims. First, there are neurocognitive antecedents to ideological thinking: the brain’s low-level neurocognitive dispositions influence its receptivity to ideological doctrines. Second, there are neurocognitive consequences to ideological engagement: strong exposure and adherence to ideological doctrines can shape perceptual and cognitive systems. This article details the neurocognitive model of ideological thinking and synthesizes the empirical evidence in support of its claims. The model postulates that there are bidirectional processes between the brain and the ideological environment, and so it can address the roles of situational and motivational factors in ideologically motivated action. This endeavor highlights that an interdisciplinary neurocognitive approach to ideologies can facilitate biologically informed accounts of the ideological brain and thus reveal who is most susceptible to extreme and authoritarian ideologies. By investigating the relationships between low-level perceptual processes and high-level ideological attitudes, we can develop a better grasp of our collective history as well as the mechanisms that may structure our political futures.


2020 · Author(s): Haider Al-Tahan, Yalda Mohsenzadeh

Abstract While vision evokes a dense network of feedforward and feedback neural processes in the brain, visual processes are primarily modeled with feedforward hierarchical neural networks, leaving the computational role of feedback processes poorly understood. Here, we developed a generative autoencoder neural network model and adversarially trained it on a categorically diverse data set of images. We hypothesized that the feedback processes in the ventral visual pathway can be represented by the reconstruction of visual information performed by the generative model. We compared the representational similarity of the activity patterns in the proposed model with temporal (magnetoencephalography) and spatial (functional magnetic resonance imaging) visual brain responses. The proposed generative model identified two segregated neural dynamics in the visual brain: a temporal hierarchy of processes transforming low-level visual information into high-level semantics in the feedforward sweep, and a temporally later set of inverse processes reconstructing low-level visual information from a high-level latent representation in the feedback sweep. Our results add to previous studies on neural feedback processes by presenting a new insight into the algorithmic function of, and the information carried by, the feedback processes in the ventral visual pathway.

Author summary: It has been shown that the ventral visual cortex consists of a dense network of regions with feedforward and feedback connections. The feedforward path processes visual inputs along a hierarchy of cortical areas that starts in early visual cortex (an area tuned to low-level features, e.g. edges and corners) and ends in inferior temporal cortex (an area that responds to higher-level categorical content, e.g. faces and objects). The feedback connections, in turn, modulate neuronal responses in this hierarchy by broadcasting information from higher to lower areas. In recent years, deep neural network models trained on object recognition tasks have achieved human-level performance and shown activation patterns similar to those of the visual brain. In this work, we developed a generative neural network model that consists of encoding and decoding sub-networks. By comparing this computational model with human brain temporal (magnetoencephalography) and spatial (functional magnetic resonance imaging) response patterns, we found that the encoder resembles the brain's feedforward processing dynamics and the decoder shares similarity with the brain's feedback processing dynamics. These results provide an algorithmic insight into the spatiotemporal dynamics of feedforward and feedback processes in biological vision.
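A minimal sketch of a model-brain representational similarity comparison of the kind described above, assuming correlation-distance RDMs rank-correlated with Spearman's rho (a common RSA choice, not necessarily the authors' exact pipeline); the data and dimensions below are placeholders.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(activity):
    """Representational dissimilarity matrix (condensed form):
    1 - Pearson correlation between activation patterns for each image pair."""
    return pdist(activity, metric="correlation")

rng = np.random.default_rng(1)
n_images, n_model_units, n_sensors = 92, 256, 306

# Placeholder activation patterns: one row per image.
model_layer_activity = rng.normal(size=(n_images, n_model_units))   # e.g. a decoder layer
brain_responses = rng.normal(size=(n_images, n_sensors))            # e.g. MEG at one time point

# Representational similarity: rank-correlate the two RDMs.
rho, p = spearmanr(rdm(model_layer_activity), rdm(brain_responses))
print(f"model-brain representational similarity: rho={rho:.3f}, p={p:.3g}")
```

Repeating this comparison per MEG time point (or per fMRI region) is what yields the temporally resolved feedforward versus feedback profiles described in the abstract.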


2018 · Vol 29 (8) · pp. 3380-3389 · Author(s): Timothy J Andrews, Ryan K Smith, Richard L Hoggart, Philip I N Ulrich, Andre D Gouws

Abstract Individuals from different social groups interpret the world in different ways. This study explores the neural basis of these group differences using a paradigm that simulates natural viewing conditions. Our aim was to determine whether group differences could be found in sensory regions involved in the perception of the world or were evident in higher-level regions that are important for the interpretation of sensory information. We measured brain responses from two groups of football supporters while they watched a video of matches between their teams. The time course of the response was then compared between individuals supporting the same (within-group) or a different (between-group) team. We found high intersubject correlations in low-level and high-level regions of the visual brain. However, these regions of the brain did not show any group differences. Regions that showed higher correlations for individuals from the same group were found in a network of frontal and subcortical brain regions. The interplay between these regions suggests that a range of cognitive processes, from motor control to social cognition and reward, is important in the establishment of social groups. These results suggest that group differences are primarily reflected in regions involved in the evaluation and interpretation of the sensory input.
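The within- versus between-group comparison rests on intersubject correlation (ISC) of regional response time courses. A minimal sketch with synthetic data and hypothetical names, assuming pairwise Pearson correlation of region-averaged time series:

```python
import numpy as np

def pairwise_isc(timecourses_a, timecourses_b):
    """Mean Pearson correlation between the regional time courses of every
    subject pair drawn from two (possibly identical) groups."""
    corrs = []
    for i, tc_a in enumerate(timecourses_a):
        for j, tc_b in enumerate(timecourses_b):
            if timecourses_a is timecourses_b and j <= i:
                continue  # within a group, skip self-pairs and duplicate pairs
            corrs.append(np.corrcoef(tc_a, tc_b)[0, 1])
    return np.mean(corrs)

rng = np.random.default_rng(2)
shared_signal = rng.normal(size=300)                       # stimulus-driven component
group1 = [shared_signal + rng.normal(size=300) for _ in range(10)]
group2 = [shared_signal + rng.normal(size=300) for _ in range(10)]

within = (pairwise_isc(group1, group1) + pairwise_isc(group2, group2)) / 2
between = pairwise_isc(group1, group2)
print(f"within-group ISC: {within:.3f}, between-group ISC: {between:.3f}")
```

Regions where the within-group ISC reliably exceeds the between-group ISC are the ones interpreted as carrying group-specific responses.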


2016 · Author(s): Long Luu, Alan A Stocker

Abstract Illusions provide a great opportunity to study how perception is affected by both the observer's expectations and the way sensory information is represented (1-6). Recently, Jazayeri and Movshon (7) reported a new and interesting perceptual illusion, demonstrating that the perceived motion direction of a dynamic random-dot stimulus is systematically biased when preceded by a motion discrimination judgment. The authors hypothesized that these biases emerge because the brain predominantly relies on those neurons that are most informative for solving the discrimination task (8), but then uses the same neural weighting profile for generating the percept. In other words, they argue that these biases are "mistakes" of the brain, resulting from the use of inappropriate neural read-out weights. While we were able to replicate the illusion for a different visual stimulus (orientation), our new psychophysical data suggest that the above interpretation is likely incorrect: the biases are not caused by a read-out profile optimized for solving the discrimination task but rather by the specific choices subjects make in the discrimination task on any given trial. We formulate this idea as a conditioned Bayesian observer model and show that it can explain the new as well as the original psychophysical data. In this framework, the biases are not caused by mistakes but rather by the brain's attempt to remain 'self-consistent' in its inference process. Our model establishes a direct connection between the current perceptual illusion and the well-known phenomena of cognitive consistency and dissonance (9, 10).
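A minimal numerical sketch of the conditioned-decoding idea for a one-dimensional orientation variable (hypothetical noise value, flat prior; an illustration of the general principle, not the authors' fitted model): the estimate is taken as the mean of the posterior restricted to whichever category was chosen in the preceding discrimination, which produces an estimate repelled from the discrimination boundary.

```python
import numpy as np

theta = np.linspace(-30, 30, 601)          # orientation relative to the reference (deg)
sigma_sensory = 6.0                        # assumed sensory noise (hypothetical value)

def posterior(measurement):
    """Posterior over orientation given a noisy measurement, with a flat prior."""
    like = np.exp(-0.5 * ((theta - measurement) / sigma_sensory) ** 2)
    return like / like.sum()

m = 3.0                                    # noisy measurement on this trial
post = posterior(m)

# Discrimination: choose 'clockwise' if most posterior mass lies above the reference.
choice_cw = post[theta > 0].sum() > 0.5

# Standard estimate: mean of the full posterior.
estimate_full = np.sum(theta * post)

# Self-consistent estimate: mean of the posterior conditioned on the choice.
mask = theta > 0 if choice_cw else theta <= 0
post_cond = post * mask
post_cond /= post_cond.sum()
estimate_conditioned = np.sum(theta * post_cond)

print(f"choice: {'CW' if choice_cw else 'CCW'}, "
      f"full-posterior estimate: {estimate_full:.2f} deg, "
      f"conditioned estimate: {estimate_conditioned:.2f} deg")
```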


2019 · Author(s): Cooper A. Smout, Matthew F. Tang, Marta I. Garrido, Jason B. Mattingley

Abstract The human brain is thought to optimise the encoding of incoming sensory information through two principal mechanisms: prediction uses stored information to guide the interpretation of forthcoming sensory events, and attention prioritises these events according to their behavioural relevance. Despite the ubiquitous contributions of attention and prediction to various aspects of perception and cognition, it remains unknown how they interact to modulate information processing in the brain. A recent extension of predictive coding theory suggests that attention optimises the expected precision of predictions by modulating the synaptic gain of prediction-error units. Since prediction errors code for the difference between predictions and sensory signals, this model would suggest that attention increases the selectivity for mismatch information in the neural response to a surprising stimulus. Alternative predictive coding models propose that attention increases the activity of prediction (or 'representation') neurons, and would therefore suggest that attention and prediction synergistically modulate selectivity for feature information in the brain. Here we applied multivariate forward encoding techniques to neural activity recorded via electroencephalography (EEG) as human observers performed a simple visual task, to test for the effect of attention on both mismatch and feature information in the neural response to surprising stimuli. Participants attended or ignored a periodic stream of gratings, the orientations of which could be predictable, surprising, or unpredictable. We found that surprising stimuli evoked neural responses that were encoded according to the difference between predicted and observed stimulus features, and that attention facilitated the encoding of this type of information in the brain. These findings advance our understanding of how attention and prediction modulate information processing in the brain, and support the theory that attention optimises precision expectations during hierarchical inference by increasing the gain of prediction errors.
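A minimal sketch of a forward (inverted) encoding analysis of the general kind described, with synthetic data and hypothetical dimensions, assuming half-wave-rectified cosine orientation channels and ordinary least squares (one common formulation, not necessarily the authors' exact implementation):

```python
import numpy as np

rng = np.random.default_rng(3)
n_train, n_test, n_sensors, n_channels = 400, 100, 64, 8
channel_centres = np.linspace(0, 180, n_channels, endpoint=False)

def channel_responses(orientations_deg):
    """Idealised tuning-channel responses: half-wave-rectified cosines raised
    to a power, one column per orientation channel."""
    diff = np.deg2rad(orientations_deg[:, None] - channel_centres[None, :])
    return np.maximum(np.cos(2 * diff), 0) ** 6

# Synthetic 'EEG' generated from a random sensor weighting of channel responses.
train_ori = rng.uniform(0, 180, n_train)
test_ori = rng.uniform(0, 180, n_test)
true_weights = rng.normal(size=(n_channels, n_sensors))
eeg_train = channel_responses(train_ori) @ true_weights + rng.normal(size=(n_train, n_sensors))
eeg_test = channel_responses(test_ori) @ true_weights + rng.normal(size=(n_test, n_sensors))

# Forward model: estimate channel-to-sensor weights from training data...
w_hat, *_ = np.linalg.lstsq(channel_responses(train_ori), eeg_train, rcond=None)
# ...then invert it to reconstruct channel responses on held-out trials.
recon, *_ = np.linalg.lstsq(w_hat.T, eeg_test.T, rcond=None)
print("reconstructed channel responses:", recon.T.shape)  # (n_test, n_channels)
```

Basing the channels on stimulus orientation versus on the prediction-observation difference is what distinguishes testing for "feature" versus "mismatch" information in the abstract.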


2021 · Author(s): Meng Liu, Wenshan Dong, Shaozheng Qin, Tom Verguts, Qi Chen

Abstract Human perception and learning are thought to rely on a hierarchical generative model that is continuously updated via precision-weighted prediction errors (pwPEs). However, the neural basis of this cognitive process, and how it unfolds during decision making, remains poorly understood. To investigate this question, we combined a hierarchical Bayesian model (the Hierarchical Gaussian Filter, HGF) with electrophysiological (EEG) recordings while participants performed a probabilistic reversal learning task in alternating stable and volatile environments. Behaviorally, the HGF fitted the data significantly better than two non-hierarchical control models. Neurally, low-level and high-level pwPEs were independently encoded by the P300 component. Low-level pwPEs were reflected in the theta (4-8 Hz) frequency band, but high-level pwPEs were not. Furthermore, the expression of high-level pwPEs was stronger for participants with better HGF fits. These results indicate that the brain employs hierarchical learning and encodes both low- and high-level learning signals separately and adaptively.
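To give the core idea of a precision-weighted prediction error in concrete form, here is a deliberately simplified, Kalman-filter-style sketch rather than the full HGF equations: each observation updates the belief by a prediction error scaled by the relative precision of the prior and the likelihood. All parameter values are illustrative.

```python
import numpy as np

def precision_weighted_updates(outcomes, obs_noise_var=1.0, volatility_var=0.1):
    """Toy belief tracking: each outcome updates the belief mean by a prediction
    error weighted by relative precision (a Kalman-gain-style learning rate).
    This is a simplification of hierarchical filters such as the HGF, not the HGF itself."""
    mu, var = 0.0, 1.0                      # prior belief mean and variance
    pwpes = []
    for u in outcomes:
        var += volatility_var               # beliefs drift, so uncertainty grows (volatility)
        delta = u - mu                      # prediction error
        gain = var / (var + obs_noise_var)  # precision weighting
        mu += gain * delta                  # precision-weighted prediction-error update
        var = (1.0 - gain) * var            # posterior variance shrinks after observing
        pwpes.append(gain * delta)
    return np.array(pwpes)

# Reversal-learning-style input: the generative mean flips halfway through.
rng = np.random.default_rng(4)
outcomes = np.concatenate([rng.normal(1.0, 1.0, 60), rng.normal(-1.0, 1.0, 60)])
pwpes = precision_weighted_updates(outcomes)
print("mean |pwPE| before vs after reversal:",
      np.abs(pwpes[:60]).mean(), np.abs(pwpes[60:]).mean())
```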


2020 · Author(s): Hao Tam Ho, David C. Burr, David Alais, Maria Concetta Morrone

Abstract To maintain a continuous and coherent percept over time, the brain makes use of past sensory information to anticipate forthcoming stimuli. We recently showed that auditory experience in the immediate past is propagated through ear-specific reverberations, manifested as rhythmic fluctuations of decision bias at alpha frequency. Here, we apply the same time-resolved behavioural method to investigate how perceptual performance changes over time under conditions of high stimulus expectation, and to examine the effect of unexpected events on behaviour. As in our previous study, participants were required to discriminate the ear-of-origin of a brief monaural pure tone embedded in uncorrelated dichotic white noise. We manipulated stimulus expectation by increasing the target probability in one ear to 80%. Consistent with our earlier findings, performance did not remain constant across trials, but varied rhythmically with delay from noise onset. Specifically, decision bias showed a similar oscillation at ~9 Hz that depended on ear congruency between successive targets. This suggests rhythmic communication of auditory perceptual history occurs early and is not readily influenced by top-down expectations. In addition, we report a novel observation specific to infrequent, unexpected stimuli that gave rise to oscillations in accuracy at ~7.6 Hz one trial after the target occurred in the non-anticipated ear. This new behavioural oscillation may reflect a mechanism for updating the sensory representation once a prediction error has been detected.
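A minimal sketch of the time-resolved behavioural analysis this line of work relies on, using synthetic data and hypothetical parameters: bin performance by target delay from noise onset, remove the slow trend, and inspect the amplitude spectrum for a rhythmic component.

```python
import numpy as np

rng = np.random.default_rng(5)
delays = np.arange(0.2, 1.2, 0.02)                 # target delays from noise onset (s)
n_trials_per_delay = 80

# Synthetic accuracy time course with a ~7.6 Hz modulation around 75% correct.
p_correct = 0.75 + 0.08 * np.sin(2 * np.pi * 7.6 * delays)
accuracy = rng.binomial(n_trials_per_delay, p_correct) / n_trials_per_delay

# Remove the mean and slow trend, then look at the amplitude spectrum.
detrended = accuracy - np.polyval(np.polyfit(delays, accuracy, 2), delays)
freqs = np.fft.rfftfreq(len(delays), d=delays[1] - delays[0])
amplitude = np.abs(np.fft.rfft(detrended)) / len(delays)
print("peak behavioural frequency: %.1f Hz" % freqs[1:][np.argmax(amplitude[1:])])
```

In practice the spectral peak would be assessed against a permutation-based null distribution rather than read off directly.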


2019 · Author(s): Stijn A. Nuiten, Andrés Canales-Johnson, Lola Beerendonk, Nutsa Nanuashvili, Johannes J. Fahrenfort, ...

Abstract Cognitive control over conflicting sensory input is central to adaptive human behavior. It might therefore not come as a surprise that past research has shown conflict detection in the absence of conscious awareness. This would suggest that the brain may detect conflict fully automatically, and that conflict detection can even occur without attention. Contrary to this intuition, we show that task-relevance is crucial for conflict detection. Univariate and multivariate analyses of electroencephalographic data from human participants revealed that when auditory stimuli are fully task-irrelevant, the brain disregards conflicting input entirely, whereas the same input elicits strong neural conflict signals when it is task-relevant. In sharp contrast, stimulus features were still processed, irrespective of task-relevance. These results show that stimulus properties are only integrated to allow conflict to be detected by prefrontal regions when sensory information is task-relevant, and therefore suggest an attentional bottleneck at high levels of information analysis.


2021 · Author(s): Ro Julia Robotham, Sheila Kerry, Grace E Rice, Alex Leff, Matt Lambon Ralph, ...

Much of the patient literature on the visual recognition of faces, words and objects is based on single case studies of patients selected according to their symptom profile. The Back of the Brain project aims to provide novel insights into the cerebral and cortical architecture underlying visual recognition of complex stimuli by adopting a different approach. A large group of patients was recruited according to their lesion location (in the areas supplied by the posterior cerebral artery) rather than their symptomatology. All patients were assessed with the same battery of sensitive tests of visual perception, enabling the identification of dissociations as well as associations between deficits in face, word and object recognition. This paper provides a detailed description of the extensive behavioural test battery that was developed for the Back of the Brain project and that enables assessment of low-level, intermediate and high-level visual perceptual abilities.
• Extensive behavioural test battery for assessing low-level, intermediate and high-level visual perception in patients with posterior cerebral artery stroke
• Method enabling direct comparison of visual face, word and object processing abilities in patients with posterior cerebral artery stroke

