Prediction Error and Repetition Suppression Have Distinct Effects on Neural Representations of Visual Information

2017 ◽  
Author(s):  
Matthew F. Tang ◽  
Cooper A. Smout ◽  
Ehsan Arabzadeh ◽  
Jason B. Mattingley

Abstract
Predictive coding theories argue that recent experience establishes expectations in the brain that generate prediction errors when violated. Prediction errors provide a possible explanation for repetition suppression, where evoked neural activity is attenuated across repeated presentations of the same stimulus. The predictive coding account argues that repetition suppression arises because repeated stimuli are expected, whereas non-repeated stimuli are unexpected and thus elicit larger neural responses. Here, we employed electroencephalography in humans to test the predictive coding account of repetition suppression by presenting sequences of visual gratings with orientations that were expected either to repeat or change in separate blocks of trials. We applied multivariate forward modelling to determine how orientation selectivity was affected by repetition and prediction. Unexpected stimuli were associated with significantly enhanced orientation selectivity, whereas selectivity was unaffected for repeated stimuli. Our results suggest that repetition suppression and expectation have separable effects on neural representations of visual feature information.

eLife ◽  
2018 ◽  
Vol 7
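The multivariate forward (inverted encoding) modelling approach named in this abstract can be illustrated with a minimal sketch. This is not the authors' code: the channel count, tuning function, synthetic data, and noise level are all assumptions chosen only to show the two-step logic (fit sensor weights from hypothetical channel responses, then invert the fit to recover orientation-selective channel profiles from held-out data).

```python
# Minimal sketch of a forward/inverted encoding model for orientation
# (all parameters and the synthetic data are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)

n_channels = 6  # hypothetical orientation channels spanning 0-180 degrees
centers = np.arange(n_channels) * 180 / n_channels

def channel_responses(orientations):
    """Idealised tuning: half-wave-rectified cosine raised to a power (period 180 deg)."""
    d = np.deg2rad(orientations[:, None] - centers[None, :])
    return np.maximum(np.cos(2 * d), 0) ** 5

# Synthetic "EEG" data: sensor activity = channel responses @ weights + noise
n_trials, n_sensors = 200, 32
train_oris = rng.uniform(0, 180, n_trials)
C_train = channel_responses(train_oris)                 # trials x channels
true_W = rng.normal(size=(n_channels, n_sensors))       # channels x sensors
B_train = C_train @ true_W + 0.1 * rng.normal(size=(n_trials, n_sensors))

# Step 1: estimate sensor weights from training data (least squares)
W_hat, *_ = np.linalg.lstsq(C_train, B_train, rcond=None)

# Step 2: invert the model on held-out data to recover channel profiles
test_oris = rng.uniform(0, 180, 50)
B_test = channel_responses(test_oris) @ true_W
C_hat = B_test @ np.linalg.pinv(W_hat)                  # trials x channels

# The recovered profile should peak near each test orientation; circular
# error is bounded by half the 30-degree channel spacing when recovery works.
peaks = centers[np.argmax(C_hat, axis=1)]
raw = np.abs(peaks - test_oris)
err = np.minimum(raw, 180 - raw)
print(float(err.mean()))
```

In the actual studies this inversion is applied to multichannel EEG at each time point, and the amplitude of the reconstructed channel profile serves as the measure of orientation selectivity compared across repetition and expectation conditions.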


2019 ◽  
Author(s):  
Cooper A. Smout ◽  
Matthew F. Tang ◽  
Marta I. Garrido ◽  
Jason B. Mattingley

Abstract
The human brain is thought to optimise the encoding of incoming sensory information through two principal mechanisms: prediction uses stored information to guide the interpretation of forthcoming sensory events, and attention prioritises these events according to their behavioural relevance. Despite the ubiquitous contributions of attention and prediction to various aspects of perception and cognition, it remains unknown how they interact to modulate information processing in the brain. A recent extension of predictive coding theory suggests that attention optimises the expected precision of predictions by modulating the synaptic gain of prediction-error units. Since prediction errors code for the difference between predictions and sensory signals, this model would suggest that attention increases the selectivity for mismatch information in the neural response to a surprising stimulus. Alternative predictive coding models propose that attention increases the activity of prediction (or ‘representation’) neurons, and would therefore suggest that attention and prediction synergistically modulate selectivity for feature information in the brain. Here we applied multivariate forward encoding techniques to neural activity recorded via electroencephalography (EEG) as human observers performed a simple visual task, to test for the effect of attention on both mismatch and feature information in the neural response to surprising stimuli. Participants attended or ignored a periodic stream of gratings, the orientations of which could be either predictable, surprising, or unpredictable. We found that surprising stimuli evoked neural responses that were encoded according to the difference between predicted and observed stimulus features, and that attention facilitated the encoding of this type of information in the brain. These findings advance our understanding of how attention and prediction modulate information processing in the brain, and support the theory that attention optimises precision expectations during hierarchical inference by increasing the gain of prediction errors.
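The two accounts contrasted in this abstract differ only in which class of units receives the attentional gain. A toy sketch of that distinction (our illustration, not the study's model; the scalar units, values, and gain factor are assumptions):

```python
# Toy contrast between the two predictive coding accounts of attention:
# gain on prediction-error units versus gain on prediction/representation units.

def prediction_error(inp, prediction, gain=1.0):
    """Gain-modulated prediction-error signal (input minus prediction)."""
    return gain * (inp - prediction)

def representation(prediction, gain=1.0):
    """Gain-modulated activity of a prediction ('representation') unit."""
    return gain * prediction

inp, predicted = 1.0, 0.4

# Precision account: attention multiplies the mismatch signal itself.
unattended = prediction_error(inp, predicted, gain=1.0)
attended = prediction_error(inp, predicted, gain=2.0)

# Alternative account: attention instead boosts the prediction unit,
# so attended responses scale with the predicted feature, not the mismatch.
attended_rep = representation(predicted, gain=2.0)
```

Under the precision account, attention amplifies selectivity for mismatch information (the `attended` error is twice the `unattended` one), which is the pattern the study reports in the EEG encoding results.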


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Stephen J. Gotts ◽  
Shawn C. Milleville ◽  
Alex Martin

Abstract
Stimulus identification commonly improves with repetition over long delays (“repetition priming”), whereas neural activity commonly decreases (“repetition suppression”). Multiple models have been proposed to explain this brain-behavior relationship, predicting alterations in functional and/or effective connectivity (Synchrony and Predictive Coding models), in the latency of neural responses (Facilitation model), and in the relative similarity of neural representations (Sharpening model). Here, we test these predictions with fMRI during overt and covert naming of repeated and novel objects. While we find partial support for predictions of the Facilitation and Sharpening models in the left fusiform gyrus and left frontal cortex, the data were most consistent with the Synchrony model, with increased coupling between right temporoparietal and anterior cingulate cortex for repeated objects that correlated with priming magnitude across participants. Increased coupling and repetition suppression varied independently, each explaining unique variance in priming and requiring modifications of all current models.


2013 ◽  
Vol 36 (3) ◽  
pp. 221-221 ◽  
Author(s):  
Lars Muckli ◽  
Lucy S. Petro ◽  
Fraser W. Smith

Abstract
Clark offers a powerful description of the brain as a prediction machine, one that makes progress on two distinct levels. First, on an abstract conceptual level, it provides a unifying framework for perception, action, and cognition (including subdivisions such as attention, expectation, and imagination). Second, hierarchical prediction offers progress on a concrete descriptive level for testing and constraining the conceptual elements and mechanisms of predictive coding models (estimation of predictions, prediction errors, and internal models).


2017 ◽  
Vol 372 (1714) ◽  
pp. 20160105 ◽  
Author(s):  
Rosy Southwell ◽  
Anna Baumann ◽  
Cécile Gal ◽  
Nicolas Barascud ◽  
Karl Friston ◽  
...  

In this series of behavioural and electroencephalography (EEG) experiments, we investigate the extent to which repeating patterns of sounds capture attention. Work in the visual domain has revealed attentional capture by statistically predictable stimuli, consistent with predictive coding accounts, which suggest that attention is drawn to sensory regularities. Here, stimuli comprised rapid sequences of tone pips, arranged in regular (REG) or random (RAND) patterns. EEG data demonstrate that the brain rapidly recognizes predictable patterns, manifested as a rapid increase in responses to REG relative to RAND sequences. This increase is reminiscent of the increase in gain on neural responses to attended stimuli often seen in the neuroimaging literature, and thus consistent with the hypothesis that predictable sequences draw attention. To study potential attentional capture by auditory regularities, we used REG and RAND sequences in two different behavioural tasks designed to reveal effects of attentional capture by regularity. Overall, the pattern of results suggests that regularity does not capture attention. This article is part of the themed issue ‘Auditory and visual scene analysis’.


2020 ◽  
Author(s):  
Arjen Alink ◽  
Helen Blank

Abstract
The expectation-suppression effect – reduced stimulus-evoked responses to expected stimuli – is widely considered to be an empirical hallmark of reduced prediction errors in the framework of predictive coding. Here we challenge this notion by proposing that this phenomenon can also be explained by a reduced attention effect. Specifically, we argue that reduced responses to predictable stimuli can also be explained by a reduced saliency-driven allocation of attention. To resolve whether expectation suppression is best explained by attention or predictive coding, additional research is needed to determine whether attention effects precede the encoding of expectation violations (or vice versa) and to reveal how expectations change neural representations of stimulus features.


2008 ◽  
Vol 364 (1515) ◽  
pp. 331-339 ◽  
Author(s):  
Andrew J King

The visual and auditory systems frequently work together to facilitate the identification and localization of objects and events in the external world. Experience plays a critical role in establishing and maintaining congruent visual–auditory associations, so that the different sensory cues associated with targets that can be both seen and heard are synthesized appropriately. For stimulus location, visual information is normally more accurate and reliable and provides a reference for calibrating the perception of auditory space. During development, vision plays a key role in aligning neural representations of space in the brain, as revealed by the dramatic changes produced in auditory responses when visual inputs are altered, and is used throughout life to resolve short-term spatial conflicts between these modalities. However, accurate, and even supra-normal, auditory localization abilities can be achieved in the absence of vision, and the capacity of the mature brain to relearn to localize sound in the presence of substantially altered auditory spatial cues does not require visuomotor feedback. Thus, while vision is normally used to coordinate information across the senses, the neural circuits responsible for spatial hearing can be recalibrated in a vision-independent fashion. Nevertheless, early multisensory experience appears to be crucial for the emergence of an ability to match signals from different sensory modalities and therefore for the outcome of audiovisual-based rehabilitation of deaf patients in whom hearing has been restored by cochlear implantation.


2020 ◽  
Vol 30 (10) ◽  
pp. 5204-5217 ◽  
Author(s):  
Adrien Witon ◽  
Amirali Shirazibeheshti ◽  
Jennifer Cooke ◽  
Alberto Aviles ◽  
Ram Adapa ◽  
...  

Abstract Two important theories in cognitive neuroscience are predictive coding (PC) and the global workspace (GW) theory. A key research task is to understand how these two theories relate to one another, and particularly, how the brain transitions from a predictive early state to the eventual engagement of a brain-scale state (the GW). To address this question, we present a source-localization of EEG responses evoked by the local-global task—an experimental paradigm that engages a predictive hierarchy, which encompasses the GW. The results of our source reconstruction suggest three phases of processing. The first phase involves the sensory (here auditory) regions of the superior temporal lobe and predicts sensory regularities over a short timeframe (as per the local effect). The third phase is brain-scale, involving inferior frontal, as well as inferior and superior parietal regions, consistent with a global neuronal workspace (GNW; as per the global effect). Crucially, our analysis suggests that there is an intermediate (second) phase, involving modulatory interactions between inferior frontal and superior temporal regions. Furthermore, sedation with propofol reduces modulatory interactions in the second phase. This selective effect is consistent with a PC explanation of sedation, with propofol acting on descending predictions of the precision of prediction errors; thereby constraining access to the GNW.


2019 ◽  
Author(s):  
Sophie-Marie Rostalski ◽  
Catarina Amado ◽  
Gyula Kovács ◽  
Daniel Feuerriegel

Abstract
Repeated presentation of a stimulus leads to reductions in measures of neural responses. This phenomenon, termed repetition suppression (RS), has recently been conceptualized using models based on predictive coding, which describe RS as arising from expectations that are weighted toward recently seen stimuli. To evaluate these models, researchers have manipulated the likelihood of stimulus repetition within experiments. They have reported findings that are inconsistent across hemodynamic and electrophysiological measures, and difficult to interpret as clear support or refutation of predictive coding models. We instead investigated a different type of expectation effect that is apparent in stimulus repetition experiments: the difference in one’s ability to predict the identity of repeated, compared to unrepeated, stimuli. In previous experiments that presented pairs of repeated or alternating images, once participants had seen the first stimulus image in a pair, they could form specific expectations about the repeated stimulus image. However, they could not form such expectations for the alternating image, which was often randomly chosen from a large stimulus set. To assess the contribution of stimulus predictability effects to previously observed RS, we measured BOLD signals while presenting pairs of repeated and alternating faces. This was done in contexts in which stimuli in alternating trials were either (i) predictable through statistically learned associations between pairs of stimuli or (ii) chosen randomly and therefore unpredictable. We found that RS in the right fusiform face area (FFA) was much larger in trials with unpredictable compared to predictable alternating faces. This was primarily due to unpredictable alternating stimuli evoking larger BOLD signals than predictable alternating stimuli. We show that imbalances in stimulus predictability across repeated and alternating trials can greatly inflate measures of RS, or even mimic RS effects. Our findings also indicate that stimulus-specific expectations, as described by predictive coding models, may account for a sizeable portion of observed RS effects.


eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Ediz Sohoglu ◽  
Matthew H Davis

Human speech perception can be described as Bayesian perceptual inference, but how are these Bayesian computations instantiated neurally? We used magnetoencephalographic recordings of brain responses to degraded spoken words and experimentally manipulated signal quality and prior knowledge. We first demonstrate that spectrotemporal modulations in speech are more strongly represented in neural responses than alternative speech representations (e.g. spectrogram or articulatory features). Critically, we found an interaction between speech signal quality and expectations from prior written text on the quality of neural representations; increased signal quality enhanced neural representations of speech that mismatched with prior expectations, but led to greater suppression of speech that matched prior expectations. This interaction is a unique neural signature of prediction error computations and is apparent in neural responses within 100 ms of speech input. Our findings contribute to the detailed specification of a computational model of speech perception based on predictive coding frameworks.

