How prior knowledge prepares perception: Alpha-band oscillations carry perceptual expectations and influence early visual responses

2016 ◽  
Author(s):  
Jason Samaha ◽  
Bastien Boutonnet ◽  
Bradley R. Postle ◽  
Gary Lupyan

Abstract
Perceptual experience results from a complex interplay of bottom-up input and prior knowledge about the world, yet the extent to which knowledge affects perception, the neural mechanisms underlying these effects, and the stages of processing at which these two sources of information converge are still unclear. In several experiments we show that language, in the form of verbal cues, both aids recognition of ambiguous “Mooney” images and improves objective visual discrimination performance in a match/non-match task. We then used electroencephalography (EEG) to better understand the mechanisms of this effect. The improved discrimination of previously labeled images was accompanied by a larger occipital-parietal P1 evoked response to the meaningful versus meaningless target stimuli. Time-frequency analysis of the interval between the cue and the target stimulus revealed increases in the power of posterior alpha-band (8-14 Hz) oscillations when the meaning of the stimuli to be compared was trained. The magnitude of the pre-target alpha difference and the P1 amplitude difference were positively correlated across individuals. These results suggest that prior knowledge prepares the brain for upcoming perception via the modulation of alpha-band oscillations, and that this preparatory state influences early (~120 ms) stages of visual processing.
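The core analysis described above, band-limited power in a pre-target window correlated with a per-subject ERP amplitude difference, can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' pipeline; the sampling rate, subject count, and effect sizes are all assumed.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from scipy.stats import pearsonr

FS = 250  # sampling rate in Hz (assumed)

def alpha_power(eeg, fs=FS, band=(8.0, 14.0)):
    """Mean alpha-band power of a 1-D EEG trace via band-pass + Hilbert envelope."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    envelope = np.abs(hilbert(filtfilt(b, a, eeg)))
    return np.mean(envelope ** 2)

# Synthetic stand-in for 20 subjects, each with a trained and an untrained condition.
rng = np.random.default_rng(0)
t = np.arange(0, 1.0, 1 / FS)
alpha_diff = []
for _ in range(20):
    gain = rng.uniform(1.0, 2.0)  # subject-specific alpha boost after training
    trained = gain * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)
    untrained = 0.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)
    alpha_diff.append(alpha_power(trained) - alpha_power(untrained))
alpha_diff = np.asarray(alpha_diff)

# Hypothetical per-subject P1 amplitude differences (meaningful minus meaningless),
# constructed here to covary with the alpha difference.
p1_diff = 0.8 * alpha_diff + rng.normal(0, 0.2, alpha_diff.size)
r, p = pearsonr(alpha_diff, p1_diff)
```

The same band-pass + Hilbert approach generalizes to any band; a full time-frequency analysis would repeat it (or use wavelets) across frequencies.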

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Zakaria Djebbara ◽  
Lars Brorson Fich ◽  
Klaus Gramann

Abstract
Action is a medium of collecting sensory information about the environment, which in turn is shaped by architectural affordances. Affordances characterize the fit between the physical structure of the body and capacities for movement and interaction with the environment, thus relying on sensorimotor processes associated with exploring the surroundings. Central to sensorimotor brain dynamics, the attentional mechanisms directing the gating function of sensory signals share neuronal resources with motor-related processes necessary for inferring the external causes of sensory signals. Such a predictive coding approach suggests that sensorimotor dynamics are sensitive to architectural affordances that support or suppress specific kinds of actions for an individual. However, how architectural affordances relate to the attentional mechanisms underlying the gating function for sensory signals remains unknown. Here we demonstrate that event-related desynchronization of alpha-band oscillations in parieto-occipital and medio-temporal regions covaries with architectural affordances. Source-level time–frequency analysis of data recorded in a motor-priming Mobile Brain/Body Imaging experiment revealed strong event-related desynchronization of the alpha band originating from the posterior cingulate complex, the parahippocampal region, and the occipital cortex. Our results first contribute to the understanding of how the brain resolves architectural affordances relevant to behaviour. Second, our results indicate that the alpha band originating from the occipital cortex and parahippocampal region covaries with the architectural affordances before participants interact with the environment, whereas during the interaction, the posterior cingulate cortex and motor areas dynamically reflect the affordable behaviour. We conclude that sensorimotor dynamics reflect behaviour-relevant features in the designed environment.
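Event-related desynchronization (ERD) of the kind reported here is conventionally quantified as the percentage change in band power relative to a reference interval. A minimal sketch on synthetic single-channel epochs, with all parameters assumed:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def erd_percent(trials, fs, band=(8.0, 13.0), baseline=(0.0, 0.5)):
    """Event-related desynchronization: % band-power change vs. a reference window.

    trials   : (n_trials, n_samples) single-channel EEG epochs.
    baseline : (start_s, end_s) reference window in seconds from epoch onset.
    Negative values indicate desynchronization (power loss relative to baseline).
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, trials, axis=1)
    power = np.mean(filtered ** 2, axis=0)     # trial-averaged band power per sample
    i0, i1 = int(baseline[0] * fs), int(baseline[1] * fs)
    ref = power[i0:i1].mean()
    return 100.0 * (power - ref) / ref

# Synthetic check: strong 10 Hz alpha early in the epoch, attenuated after t = 1 s.
fs = 200
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(1)
alpha = np.sin(2 * np.pi * 10 * t) * np.where(t < 1.0, 1.0, 0.3)
trials = alpha + rng.normal(0, 0.2, (30, t.size))
erd = erd_percent(trials, fs)
```

A source-level analysis as in the study would apply the same measure to source-reconstructed time courses rather than sensor data.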


Author(s):  
Martin V. Butz ◽  
Esther F. Kutter

While bottom-up visual processing is important, the brain integrates this information with top-down, generative expectations from very early on in the visual processing hierarchy. Indeed, our brain should not be viewed as a classification system, but rather as a generative system, which perceives something by integrating sensory evidence with the available, learned, predictive knowledge about that thing. The involved generative models continuously produce expectations over time, across space, and from abstracted encodings to more concrete encodings. Bayesian information processing is the key to understanding how information integration must work computationally, at least in approximation, in the brain. Bayesian networks in the form of graphical models allow the modularization of information and the factorization of interactions, which can strongly improve the efficiency of generative models. The resulting generative models essentially produce state estimations in the form of probability densities, which are well-suited to integrating multiple sources of information, including top-down and bottom-up ones. A hierarchical neural visual processing architecture illustrates this point even further. Finally, some well-known visual illusions are shown and the perceptions are explained by means of generative, information-integrating perceptual processes, which in all cases combine top-down prior knowledge and expectations about objects and environments with the available, bottom-up visual information.
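The integration of top-down and bottom-up information described here has a closed form in the Gaussian case: the posterior precision is the sum of the cue precisions, and the posterior mean is the precision-weighted average of the cue means. A small illustrative sketch (the numbers are arbitrary):

```python
import numpy as np

def fuse_gaussians(means, variances):
    """Posterior over a scalar state given independent Gaussian cues.

    With a Gaussian prior and Gaussian likelihoods, the posterior is Gaussian:
    precision = sum of precisions, mean = precision-weighted average of means.
    """
    precisions = 1.0 / np.asarray(variances, dtype=float)
    post_var = 1.0 / precisions.sum()
    post_mean = post_var * (precisions * np.asarray(means, dtype=float)).sum()
    return post_mean, post_var

# Broad top-down prior says the feature value is 0.0; sharp bottom-up
# evidence says 2.0. The posterior is pulled toward the more precise cue.
mean, var = fuse_gaussians([0.0, 2.0], [4.0, 1.0])
```

The posterior mean lands at 1.6, closer to the sharper bottom-up estimate, and the posterior variance (0.8) is smaller than either cue's alone, which is the computational sense in which integration "improves" the estimate.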


2015 ◽  
Vol 112 (40) ◽  
pp. E5523-E5532 ◽  
Author(s):  
Peter T. Weir ◽  
Michael H. Dickinson

Although anatomy is often the first step in assigning functions to neural structures, it is not always clear whether architecturally distinct regions of the brain correspond to operational units. Whereas neuroarchitecture remains relatively static, functional connectivity may change almost instantaneously according to behavioral context. We imaged panneuronal responses to visual stimuli in a highly conserved central brain region in the fruit fly, Drosophila, during flight. In one substructure, the fan-shaped body, automated analysis revealed three layers that were unresponsive in quiescent flies but became responsive to visual stimuli when the animal was flying. The responses of these regions to a broad suite of visual stimuli suggest that they are involved in the regulation of flight heading. To identify the cell types that underlie these responses, we imaged activity in sets of genetically defined neurons with arborizations in the targeted layers. The responses of this collection during flight also segregated into three sets, confirming the existence of three layers, and they collectively accounted for the panneuronal activity. Our results provide an atlas of flight-gated visual responses in a central brain circuit.


2021 ◽  
Author(s):  
Wei Dou ◽  
Audrey Morrow ◽  
Luca Iemi ◽  
Jason Samaha

The genesis of alpha-band (8-13 Hz) activity has been characterized across many different animal experiments. However, the functional role that alpha oscillations play in perception and behavior has largely been attributed to two contrasting hypotheses, with human evidence in favor of either (or both, or neither) remaining sparse. On the one hand, alpha generators have been observed in relay sectors of the visual thalamus and are postulated to phasically inhibit afferent visual input in a feedforward manner [1-4]. On the other hand, evidence also suggests that the influence of alpha activity propagates backwards along the visual hierarchy, reflecting a feedback influence upon the visual cortex [5-9]. The primary source of human evidence regarding the role of alpha phase in visual processing has come from perceptual reports [10-16], which could be modulated by either feedforward or feedback alpha activity. Thus, although these two hypotheses are not mutually exclusive, human evidence clearly supporting either one is lacking. Here, we present human subjects with large, high-contrast visual stimuli that elicit robust C1 event-related potentials (ERPs), which peak between 70-80 milliseconds post-stimulus and are thought to reflect afferent primary visual cortex (V1) input [17-20]. We find that the phase of ongoing alpha oscillations modulates the global field power (GFP) of the EEG during this first volley of stimulus processing (the C1 time-window). On the standard assumption [21-23] that this early activity reflects postsynaptic potentials being relayed to visual cortex from the thalamus, our results suggest that alpha phase gates visual responses during the first feedforward sweep of processing.
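Phase-dependent modulation of the global field power can be illustrated as follows: estimate the prestimulus alpha phase per trial, bin trials by phase, and average GFP in the C1 window within each bin. This is a schematic sketch on random data; the channel layout, C1 window, and bin count are assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def gfp(epoch):
    """Global field power: spatial standard deviation across channels per sample."""
    return epoch.std(axis=0)

def alpha_phase(x, fs, band=(8.0, 13.0)):
    """Instantaneous alpha phase of a single channel via band-pass + Hilbert."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return np.angle(hilbert(filtfilt(b, a, x)))

# Hypothetical epochs (n_trials, n_channels, n_samples); stimulus onset at `onset`.
fs, onset = 500, 250
rng = np.random.default_rng(2)
epochs = rng.normal(0, 1, (100, 32, 500))

# Bin trials by prestimulus alpha phase at a posterior channel (index 31 stands in
# for an occipital site), then average GFP in an assumed C1 window (70-80 ms).
c1 = slice(onset + int(0.070 * fs), onset + int(0.080 * fs))
phase = np.array([alpha_phase(ep[31], fs)[onset - 1] for ep in epochs])
bins = np.digitize(phase, np.linspace(-np.pi, np.pi, 5)) - 1  # 4 phase bins
gfp_by_bin = np.array([
    np.mean([gfp(ep)[c1].mean() for ep, b in zip(epochs, bins) if b == k])
    for k in range(4)
])
```

On pure noise, as here, GFP is flat across bins; a phase-gating effect of the kind reported would appear as a systematic modulation of `gfp_by_bin` across the cycle.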


2016 ◽  
Author(s):  
Dylan R Muir ◽  
Patricia Molina-Luna ◽  
Morgane M Roth ◽  
Fritjof Helmchen ◽  
Björn M Kampa

Abstract
Local excitatory connections in mouse primary visual cortex (V1) are stronger and more prevalent between neurons that share similar functional response features. However, the details of how functional rules for local connectivity shape neuronal responses in V1 remain unknown. We hypothesised that complex responses to visual stimuli may arise as a consequence of rules for selective excitatory connectivity within the local network in the superficial layers of mouse V1. In mouse V1 many neurons respond to overlapping grating stimuli (plaid stimuli) with highly selective and facilitatory responses, which are not simply predicted by responses to single gratings presented alone. This complexity is surprising, since excitatory neurons in V1 are considered to be mainly tuned to single preferred orientations. Here we examined the consequences for visual processing of two alternative connectivity schemes: in the first case, local connections are aligned with visual properties inherited from feedforward input (a ‘like-to-like’ scheme specifically connecting neurons that share similar preferred orientations); in the second case, local connections group neurons into excitatory subnetworks that combine and amplify multiple feedforward visual properties (a ‘feature binding’ scheme). By comparing predictions from large-scale computational models with in vivo recordings of visual representations in mouse V1, we found that responses to plaid stimuli were best explained by assuming ‘feature binding’ connectivity. In contrast to the ‘like-to-like’ scheme, selective amplification within feature-binding excitatory subnetworks replicated experimentally observed facilitatory responses to plaid stimuli; explained selective plaid responses not predicted by grating selectivity; and was consistent with broad anatomical selectivity observed in mouse V1. Our results show that visual feature binding can occur through local recurrent mechanisms without requiring feedforward convergence, and that such a mechanism is consistent with visual responses and cortical anatomy in mouse V1.

Author summary
The brain is a highly complex structure, with abundant connectivity between nearby neurons in the neocortex, the outermost and evolutionarily most recent part of the brain. Although the network architecture of the neocortex can appear disordered, connections between neurons seem to follow certain rules. These rules most likely determine how information flows through the neural circuits of the brain, but the relationship between particular connectivity rules and the function of the cortical network is not known. We built models of visual cortex in the mouse, assuming distinct rules for connectivity, and examined how the various rules changed the way the models responded to visual stimuli. We also recorded responses to visual stimuli of populations of neurons in anaesthetised mice, and compared these responses with our model predictions. We found that connections in neocortex probably follow a connectivity rule that groups together neurons that differ in simple visual properties, to build more complex representations of visual stimuli. This finding is surprising because primary visual cortex is assumed to support mainly simple visual representations. We show that including specific rules for non-random connectivity in cortical models, and precisely measuring those rules in cortical tissue, is essential to understanding how information is processed by the brain.
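The two connectivity schemes contrasted here can be caricatured as weight-matrix construction rules: one couples neurons by similarity of preferred orientation, the other groups neurons into subnetworks regardless of preference. The sketch below is purely illustrative (neuron count, tuning width, and subnetwork structure are all assumed) and is not the authors' large-scale model:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
pref = rng.uniform(0, np.pi, n)  # preferred orientation of each neuron (radians)

def like_to_like(pref, kappa=8.0):
    """'Like-to-like': strength falls off with difference in preferred orientation.

    Orientation is pi-periodic, so the pairwise distance is computed on the
    doubled angle and lies in [0, pi/2].
    """
    d = np.abs(np.angle(np.exp(2j * (pref[:, None] - pref[None, :])))) / 2
    w = np.exp(-kappa * d ** 2)
    np.fill_diagonal(w, 0.0)  # no self-connections
    return w

def feature_binding(pref, n_subnets=10, rng=rng):
    """'Feature binding': subnetworks group neurons irrespective of preference,
    so a single subnetwork can combine and amplify several orientations."""
    subnet = rng.integers(0, n_subnets, pref.size)
    w = (subnet[:, None] == subnet[None, :]).astype(float)
    np.fill_diagonal(w, 0.0)
    return w

w_like = like_to_like(pref)
w_bind = feature_binding(pref)
```

Driving a recurrent rate model with either matrix is what distinguishes the schemes' predictions: under `w_bind`, amplification spreads across the mixed orientations within a subnetwork, which is the signature the study associates with facilitatory plaid responses.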


2019 ◽  
Vol 2019 ◽  
pp. 1-11
Author(s):  
Jeong Woo Choi ◽  
Kyung Hwan Kim

Interpersonal communication is based on questions and answers, and the most useful and simplest case is the binary “yes or no” question and answer. The purpose of this study is to show that it is possible to decode “yes” or “no” intentions from multichannel single-trial electroencephalograms, which were recorded while subjects covertly answered self-referential questions with either “yes” or “no.” The intention decoding algorithm consists of a common spatial pattern and a support vector machine, which are employed for feature extraction and pattern classification, respectively, after dividing the overall time-frequency range into subwindows of 200 ms × 2 Hz. The decoding accuracy using the information within each subwindow was investigated to find useful temporal and spectral ranges, and was found to be highest for 800–1200 ms in the alpha band and 200–400 ms in the theta band. When features from multiple subwindows were utilized together, the accuracy increased significantly, up to ∼86%. The most useful features for the “yes/no” discrimination were found to be focused in the right frontal region in the theta band and the right centroparietal region in the alpha band, which may reflect the violation of autobiographical facts and a higher cognitive load for “no” compared to “yes.” Our task requires the subjects to answer self-referential questions just as in interpersonal conversation, without any self-regulation of brain signals or high cognitive effort, and the “yes” and “no” answers are decoded directly from the brain activities. This implies that “mind reading” in a true sense is feasible. Beyond its contribution to a fundamental understanding of the neural mechanisms of human intention, the decoding of “yes” or “no” from brain activities may eventually lead to a natural brain-computer interface.
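The CSP-plus-classifier pipeline can be sketched as follows. CSP spatial filters come from a generalized eigendecomposition of the two class covariances, and the log-variance of the filtered trials is the standard CSP feature. For brevity this sketch scores the features with a nearest-class-mean rule rather than the SVM used in the study, and the data, channel count, and class effect are synthetic assumptions:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(X, y, n_pairs=2):
    """Common spatial pattern filters for a two-class problem.

    X : (n_trials, n_channels, n_samples), y : binary labels in {0, 1}.
    Returns (2*n_pairs, n_channels) filters whose projections maximize the
    variance ratio between the classes.
    """
    covs = []
    for k in (0, 1):
        c = np.mean([xi @ xi.T / np.trace(xi @ xi.T) for xi in X[y == k]], axis=0)
        covs.append(c)
    evals, evecs = eigh(covs[0], covs[0] + covs[1])  # generalized eigenproblem
    order = np.argsort(evals)
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]  # both extremes
    return evecs[:, picks].T

def log_var_features(X, W):
    """Log of normalized variance of CSP-filtered trials (the standard feature)."""
    proj = np.einsum("fc,tcs->tfs", W, X)
    v = proj.var(axis=2)
    return np.log(v / v.sum(axis=1, keepdims=True))

# Toy data: class 1 has extra variance on channel 0 (a stand-in for a band/time
# subwindow effect; the real pipeline would first filter each 200 ms x 2 Hz window).
rng = np.random.default_rng(4)
X = rng.normal(0, 1, (80, 8, 100))
y = np.repeat([0, 1], 40)
X[y == 1, 0] *= 3.0

W = csp_filters(X, y)
F = log_var_features(X, W)
# Nearest-class-mean scoring on the CSP features (the study used an SVM).
m0, m1 = F[y == 0].mean(axis=0), F[y == 1].mean(axis=0)
pred = (np.linalg.norm(F - m1, axis=1) < np.linalg.norm(F - m0, axis=1)).astype(int)
accuracy = (pred == y).mean()
```

Combining features from several subwindows, as the study does, amounts to concatenating the `log_var_features` outputs computed per window before classification.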


2015 ◽  
Vol 6 (1) ◽  
pp. 187-197
Author(s):  
Ahmed Almurshedi ◽  
Abd Khamim Ismail

Abstract
Perceptual decision making depends on the choices available for the presented task. Most event-related potential (ERP) experiments are designed with two options, such as YES or NO. In some cases, however, subjects may become confused about the presented task in such a way that they cannot provide a behavioral response. This study aims to put subjects into such a puzzled state in order to address the following questions: How does the brain respond during puzzling moments? And what is the brain’s response to a non-answerable task? To address these questions, ERPs were acquired during a scintillation grid illusion task. The subjects were required to count the number of illusory dots, a task that was impossible to perform. The results showed the presence of an N130 component over the parietal area during the puzzling task. Coherency among the brain hemispheres was enhanced with the complexity of the task. The neural generators’ source localizations were projected to a multimodal complex covering the left postcentral gyrus, supramarginal gyrus, and angular gyrus. This study concludes that the N130 component is strongly related to perception in a puzzling-task network, but not to the visual processing network.
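Interhemispheric coherency of the sort reported here is commonly estimated with Welch's magnitude-squared coherence between homologous electrode pairs. A minimal sketch on synthetic signals sharing a common 10 Hz component (the electrode roles and all parameters are assumptions):

```python
import numpy as np
from scipy.signal import coherence

fs = 256
t = np.arange(0, 8, 1 / fs)
rng = np.random.default_rng(5)
shared = np.sin(2 * np.pi * 10 * t)        # common 10 Hz drive to both hemispheres
left = shared + rng.normal(0, 1, t.size)   # stand-in for a left parietal channel
right = shared + rng.normal(0, 1, t.size)  # stand-in for a right parietal channel

# Magnitude-squared coherence (0 = independent, 1 = perfectly coupled per frequency).
f, coh = coherence(left, right, fs=fs, nperseg=512)
alpha_coh = coh[(f >= 8) & (f <= 13)].mean()
```

A task-complexity effect like the one described would show up as higher coherence values in the harder condition when this estimate is computed per condition and compared.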


Author(s):  
Tanaz Molapour ◽  
Cindy C Hagan ◽  
Brian Silston ◽  
Haiyan Wu ◽  
Maxwell Ramstead ◽  
...  

Abstract
The social environment presents the human brain with the most complex of information processing demands. The computations that the brain must perform occur in parallel, combine social and nonsocial cues, produce verbal and non-verbal signals, and involve multiple cognitive systems, including memory, attention, emotion, and learning. This occurs dynamically and at timescales ranging from milliseconds to years. Here, we propose that during social interactions, seven core operations interact to underwrite coherent social functioning; these operations accumulate evidence efficiently, from multiple modalities, when inferring what to do next. We deconstruct the social brain and outline the key components entailed for successful human social interaction. These include (1) social perception; (2) social inferences, such as mentalizing; (3) social learning; (4) social signaling through verbal and non-verbal cues; (5) social drives (e.g., how to increase one’s status); (6) determining the social identity of agents, including oneself; and (7) minimizing uncertainty within the current social context by integrating sensory signals and inferences. We argue that while it is important to examine these distinct aspects of social inference, to understand the true nature of the human social brain we must also explain how the brain integrates information from the social world.


2009 ◽  
Vol 102 (6) ◽  
pp. 3469-3480 ◽  
Author(s):  
H. M. Van Ettinger-Veenstra ◽  
W. Huijbers ◽  
T. P. Gutteling ◽  
M. Vink ◽  
J. L. Kenemans ◽  
...  

It is well known that parts of a visual scene are prioritized for visual processing, depending on the current situation. How the CNS moves this focus of attention across the visual image is largely unknown, although there is substantial evidence that preparation of an action is a key factor. Our results support the view that direct corticocortical feedback connections from frontal oculomotor areas to the visual cortex are responsible for the coupling between eye movements and shifts of visuospatial attention. Functional magnetic resonance imaging (fMRI)–guided transcranial magnetic stimulation (TMS) was applied to the frontal eye fields (FEFs) and the intraparietal sulcus (IPS). A single pulse was delivered 60, 30, or 0 ms before a discrimination target was presented at, or next to, the goal of a saccade being prepared. Results showed that the known enhancement of discrimination performance at locations to which eye movements are being prepared was boosted by early TMS over the FEF contralateral to the eye movement direction, whereas TMS over the IPS resulted in a general performance increase. The current findings indicate that the FEF affects selective visual processing within the visual cortex itself through direct feedback projections.


2021 ◽  
Author(s):  
Hugh McGovern ◽  
Marte Otten

Bayesian processing has become a popular framework for understanding cognitive processes. However, relatively little has been done to understand how Bayesian processing in the brain can be applied to understanding intergroup cognition. We assess how categorization and evaluation processes unfold based on priors about the ethnic outgroup being perceived. We then consider how the precision of prior knowledge about groups differentially influences perception, and how the way in which information about a group was learned affects the way in which it is recalled. Finally, we evaluate the mechanisms by which humans learn information about other ethnic groups and assess how the method of learning influences future intergroup perception. We suggest that a predictive processing framework for assessing prejudice could help account for seemingly disparate findings on intergroup bias from social neuroscience, social psychology, and evolutionary psychology. Such an integration has important implications for future research on prejudice at the interpersonal, intergroup, and societal levels.

