Familiarity increases processing speed in the visual system

2019 ◽  
Author(s):  
Mariya E. Manahova ◽  
Eelke Spaak ◽  
Floris P. de Lange

Abstract
Familiarity with a stimulus leads to an attenuated neural response to the stimulus. Alongside this attenuation, recent studies have also observed a truncation of stimulus-evoked activity for familiar visual input. One proposed function of this truncation is to rapidly put neurons in a state of readiness to respond to new input. Here, we examined this hypothesis by presenting human participants with target stimuli that were embedded in rapid streams of familiar or novel distractor stimuli at different speeds of presentation, while recording brain activity using magnetoencephalography (MEG) and measuring behavioral performance. We investigated the temporal and spatial dynamics of signal truncation and whether this phenomenon bears a relationship to participants’ ability to categorize target items within a visual stream. Behaviorally, target categorization performance was markedly better when the target was embedded within familiar distractors, and this benefit became more pronounced with increasing speed of presentation. Familiar distractors showed a truncation of neural activity in the visual system, and this truncation was strongest for the fastest presentation speeds. Moreover, neural processing of the target was stronger when it was preceded by familiar distractors. Taken together, these findings suggest that truncation of neural responses for familiar items may result in stronger processing of relevant target information, resulting in superior perceptual performance.

Significance statement
The visual response to familiar input is attenuated more rapidly than for novel input. Here we find that this truncation of the neural response for familiar input is strongest for very fast image presentations. We also find a tentative function for this truncation: the neural response to a target image that is embedded within distractors is much greater when the distractors are familiar than when they are novel. Similarly, target categorization performance is much better when the target is embedded within familiar distractors, and this advantage is most obvious for very fast image presentations. This suggests that neural truncation helps to rapidly put neurons in a state of readiness to respond to new input.

2020 ◽  
Vol 32 (4) ◽  
pp. 722-733 ◽  
Author(s):  
Mariya E. Manahova ◽  
Eelke Spaak ◽  
Floris P. de Lange

Familiarity with a stimulus leads to an attenuated neural response to the stimulus. Alongside this attenuation, recent studies have also observed a truncation of stimulus-evoked activity for familiar visual input. One proposed function of this truncation is to rapidly put neurons in a state of readiness to respond to new input. Here, we examined this hypothesis by presenting human participants with target stimuli that were embedded in rapid streams of familiar or novel distractor stimuli at different speeds of presentation, while recording brain activity using magnetoencephalography and measuring behavioral performance. We investigated the temporal and spatial dynamics of signal truncation and whether this phenomenon bears a relationship to participants' ability to categorize target items within a visual stream. Behaviorally, target categorization performance was markedly better when the target was embedded within familiar distractors, and this benefit became more pronounced with increasing speed of presentation. Familiar distractors showed a truncation of neural activity in the visual system. This truncation was strongest for the fastest presentation speeds and peaked in progressively more anterior cortical regions as presentation speeds became slower. Moreover, the neural response evoked by the target was stronger when this target was preceded by familiar distractors. Taken together, these findings demonstrate that item familiarity results in a truncated neural response, is associated with stronger processing of relevant target information, and leads to superior perceptual performance.


2018 ◽  
Author(s):  
Thomas S. A. Wallis ◽  
Christina M. Funke ◽  
Alexander S. Ecker ◽  
Leon A. Gatys ◽  
Felix A. Wichmann ◽  
...  

Abstract
We subjectively perceive our visual field with high fidelity, yet large peripheral distortions can go unnoticed and peripheral objects can be difficult to identify (crowding). A recent paper proposed a model of the mid-level ventral visual stream in which neural responses were averaged over an area of space that increased as a function of eccentricity (scaling). Human participants could not discriminate synthesised model images from each other (they were metamers) when scaling was about half the retinal eccentricity. This result implicated ventral visual area V2 and approximated “Bouma’s Law” of crowding. It has subsequently been interpreted as a link between crowding zones, receptive field scaling, and our rich perceptual experience. However, participants in this experiment never saw the original images. We find that participants can easily discriminate real and model-generated images at V2 scaling. Lower scale factors than even V1 receptive fields may be required to generate metamers. Efficiently explaining why scenes look as they do may require incorporating segmentation processes and global organisational constraints in addition to local pooling.
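To make the scaling idea concrete, here is a minimal sketch (not the authors' model code) of Bouma-style pooling, in which the diameter of each pooling region grows linearly with retinal eccentricity; the function name and example values are assumptions for illustration, with 0.5 standing in for the V2-like scale factor discussed above.

```python
import numpy as np

def pooling_diameter(eccentricity_deg, scale_factor=0.5):
    """Diameter (deg) of a pooling region centered at a given eccentricity.

    Linear scaling with eccentricity, as in Bouma-style crowding models.
    A scale_factor of 0.5 corresponds to the V2-like scaling discussed above;
    lower values give smaller pooling regions and more faithful metamers.
    """
    return scale_factor * np.asarray(eccentricity_deg, dtype=float)

# Example: pooling regions at 2, 5, and 10 degrees of eccentricity
ecc = [2.0, 5.0, 10.0]
print(pooling_diameter(ecc, scale_factor=0.5))   # V2-like scaling
print(pooling_diameter(ecc, scale_factor=0.25))  # smaller, V1-like scaling
```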


2019 ◽  
Author(s):  
Bruce C. Hansen ◽  
David J. Field ◽  
Michelle R. Greene ◽  
Cassady Olson ◽  
Vladimir Miskovic

Abstract
Our understanding of information processing by the mammalian visual system has come through a variety of techniques ranging from psychophysics and fMRI to single unit recording and EEG. Each technique provides unique insights into the processing framework of the early visual system. Here, we focus on the nature of the information that is carried by steady state visual evoked potentials (SSVEPs). To study the information provided by SSVEPs, we presented human participants with a population of natural scenes and measured the relative SSVEP response. Rather than focus on particular features of this signal, we focused on the full state-space of possible responses and investigated how the evoked responses are mapped onto this space. Our results show that it is possible to map the relatively high-dimensional signal carried by SSVEPs onto a 2-dimensional space with little loss. We also show that a simple biologically plausible model can account for a high proportion of the explainable variance (∼73%) in that space. Finally, we describe a technique for measuring the mutual information that is available about images from SSVEPs. The techniques introduced here represent a new approach to understanding the nature of the information carried by SSVEPs. Crucially, this approach is general and can provide a means of comparing results across different neural recording methods. Altogether, our study sheds light on the encoding principles of early vision and provides a much needed reference point for understanding subsequent transformations of the early visual response space to deeper knowledge structures that link different visual environments.
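The state-space mapping described above can be illustrated with a generic dimensionality-reduction sketch (this is not the authors' pipeline): project per-image SSVEP response vectors onto two principal components and check how much variance the 2-dimensional map retains. The array shapes, names, and simulated data below are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical data: one SSVEP response vector per natural-scene image,
# e.g. response features concatenated across harmonics and sensors.
rng = np.random.default_rng(0)
n_images, n_features = 200, 64
ssvep_responses = rng.standard_normal((n_images, n_features))

# Project the high-dimensional response space onto 2 dimensions.
pca = PCA(n_components=2)
coords_2d = pca.fit_transform(ssvep_responses)

# How much of the response variance is captured by the 2-D map?
print(coords_2d.shape)                      # (200, 2)
print(pca.explained_variance_ratio_.sum())  # fraction of variance retained
```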


Author(s):  
Tao He ◽  
David Richter ◽  
Zhiguo Wang ◽  
Floris P. de Lange

Abstract
Both spatial and temporal context play an important role in visual perception and behavior. Humans can extract statistical regularities from both forms of context to help process the present and to construct expectations about the future. Numerous studies have found reduced neural responses to expected stimuli compared to unexpected stimuli, for both spatial and temporal regularities. However, it is largely unclear whether and how these forms of context interact. In the current fMRI study, thirty-three human volunteers were exposed to object stimuli that could be expected or surprising in terms of their spatial and temporal context. We found a reliable independent contribution of both spatial and temporal context in modulating the neural response. Specifically, neural responses to stimuli in expected compared to unexpected contexts were suppressed throughout the ventral visual stream. Interestingly, the modulation by spatial context was stronger in magnitude and more reliable than modulations by temporal context. These results suggest that while both spatial and temporal context serve as a prior that can modulate sensory processing in a similar fashion, predictions of spatial context may be a more powerful modulator in the visual system.

Significance Statement
Both temporal and spatial context can affect visual perception; however, it is largely unclear if and how these different forms of context interact in modulating sensory processing. When manipulating both temporal and spatial context expectations, we found that they jointly affected sensory processing, evident as a suppression of neural responses for expected compared to unexpected stimuli. Interestingly, the modulation by spatial context was stronger than that by temporal context. Together, our results suggest that spatial context may be a stronger modulator of neural responses than temporal context within the visual system. Thereby, the present study provides new evidence of how different types of predictions jointly modulate perceptual processing.


2021 ◽  
Vol 11 (3) ◽  
pp. 330
Author(s):  
Dalton J. Edwards ◽  
Logan T. Trujillo

Traditionally, quantitative electroencephalography (QEEG) studies collect data within controlled laboratory environments that limit the external validity of scientific conclusions. To probe these validity limits, we used a mobile EEG system to record electrophysiological signals from human participants while they were located within a controlled laboratory environment and an uncontrolled outdoor environment exhibiting several moderate background influences. Participants performed two tasks during these recordings, one engaging brain activity related to several complex cognitive functions (number sense, attention, memory, executive function) and the other engaging two default brain states. We computed EEG spectral power over three frequency bands (theta: 4–7 Hz, alpha: 8–13 Hz, low beta: 14–20 Hz) where EEG oscillatory activity is known to correlate with the neurocognitive states engaged by these tasks. Null hypothesis significance testing yielded significant EEG power effects typical of the neurocognitive states engaged by each task, but only a beta-band power difference between the two background recording environments during the default brain state. Bayesian analysis showed that the remaining environment null effects were unlikely to reflect measurement insensitivities. This overall pattern of results supports the external validity of laboratory EEG power findings for complex and default neurocognitive states engaged within moderately uncontrolled environments.
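As a rough illustration of the spectral measure used above, band-limited EEG power can be estimated with Welch's method and averaged within the theta, alpha, and low-beta ranges reported in the study. This is a generic sketch, not the study's analysis code; the sampling rate and data are simulated placeholders.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, sfreq, bands):
    """Mean spectral power per band for a single-channel EEG trace.

    eeg:   1-D array of samples
    sfreq: sampling rate in Hz
    bands: dict mapping band name -> (low_hz, high_hz)
    """
    freqs, psd = welch(eeg, fs=sfreq, nperseg=int(2 * sfreq))
    return {name: psd[(freqs >= lo) & (freqs <= hi)].mean()
            for name, (lo, hi) in bands.items()}

# Illustrative use with simulated data and the bands from the study
sfreq = 250.0
eeg = np.random.default_rng(1).standard_normal(int(60 * sfreq))
bands = {"theta": (4, 7), "alpha": (8, 13), "low_beta": (14, 20)}
print(band_power(eeg, sfreq, bands))
```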


2010 ◽  
Vol 21 (7) ◽  
pp. 931-937 ◽  
Author(s):  
C. Nathan DeWall ◽  
Geoff MacDonald ◽  
Gregory D. Webster ◽  
Carrie L. Masten ◽  
Roy F. Baumeister ◽  
...  

Pain, whether caused by physical injury or social rejection, is an inevitable part of life. These two types of pain—physical and social—may rely on some of the same behavioral and neural mechanisms that register pain-related affect. To the extent that these pain processes overlap, acetaminophen, a physical pain suppressant that acts through central (rather than peripheral) neural mechanisms, may also reduce behavioral and neural responses to social rejection. In two experiments, participants took acetaminophen or placebo daily for 3 weeks. Doses of acetaminophen reduced reports of social pain on a daily basis (Experiment 1). We used functional magnetic resonance imaging to measure participants’ brain activity (Experiment 2), and found that acetaminophen reduced neural responses to social rejection in brain regions previously associated with distress caused by social pain and the affective component of physical pain (dorsal anterior cingulate cortex, anterior insula). Thus, acetaminophen reduces behavioral and neural responses associated with the pain of social rejection, demonstrating substantial overlap between social and physical pain.


2004 ◽  
Vol 16 (9) ◽  
pp. 1669-1679 ◽  
Author(s):  
Emily D. Grossman ◽  
Randolph Blake ◽  
Chai-Youn Kim

Individuals improve with practice on a variety of perceptual tasks, presumably reflecting plasticity in underlying neural mechanisms. We trained observers to discriminate biological motion from scrambled (nonbiological) motion and examined whether the resulting improvement in perceptual performance was accompanied by changes in activation within the posterior superior temporal sulcus and the fusiform “face area,” brain areas involved in perception of biological events. With daily practice, initially naive observers became more proficient at discriminating biological from scrambled animations embedded in an array of dynamic “noise” dots, with the extent of improvement varying among observers. Learning generalized to animations never seen before, indicating that observers had not simply memorized specific exemplars. In the same observers, neural activity prior to and following training was measured using functional magnetic resonance imaging. Neural activity within the posterior superior temporal sulcus and the fusiform “face area” reflected the participants' learning: BOLD signals were significantly larger after training in response both to animations experienced during training and to novel animations. The degree of learning was positively correlated with the amplitude changes in BOLD signals.


2018 ◽  
Vol 30 (12) ◽  
pp. 1883-1901 ◽  
Author(s):  
Nicolò F. Bernardi ◽  
Floris T. Van Vugt ◽  
Ricardo Ruy Valle-Mena ◽  
Shahabeddin Vahdat ◽  
David J. Ostry

The relationship between neural activation during movement training and the plastic changes that survive beyond movement execution is not well understood. Here we ask whether the changes in resting-state functional connectivity observed following motor learning overlap with the brain networks that track movement error during training. Human participants learned to trace an arched trajectory using a computer mouse in an MRI scanner. Motor performance was quantified on each trial as the maximum distance from the prescribed arc. During learning, two brain networks were observed, one showing increased activations for larger movement error, comprising the cerebellum, parietal, visual, somatosensory, and cortical motor areas, and the other being more activated for movements with lower error, comprising the ventral putamen and the OFC. After learning, changes in brain connectivity at rest were found predominantly in areas that had shown increased activation for larger error during task, specifically the cerebellum and its connections with motor, visual, and somatosensory cortex. The findings indicate that, although both errors and accurate movements are important during the active stage of motor learning, the changes in brain activity observed at rest primarily reflect networks that process errors. This suggests that error-related networks are represented in the initial stages of motor memory formation.
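A minimal sketch of the per-trial error measure described above (the maximum distance of the traced path from the prescribed arc), assuming the arc is a segment of a circle and the trajectory is a sequence of cursor positions; the names and values are illustrative, not taken from the study.

```python
import numpy as np

def max_arc_error(trajectory_xy, center_xy, radius):
    """Maximum distance of a traced path from a prescribed circular arc.

    trajectory_xy: (n_samples, 2) array of cursor positions
    center_xy:     (2,) center of the circle the arc lies on
    radius:        radius of the prescribed arc
    """
    # Distance of each sample from the circle = |distance to center - radius|
    dists_to_center = np.linalg.norm(trajectory_xy - center_xy, axis=1)
    return np.abs(dists_to_center - radius).max()

# Illustrative use: a noisy tracing of a radius-10 arc centered at the origin
rng = np.random.default_rng(2)
angles = np.linspace(0, np.pi, 100)
path = np.column_stack([10 * np.cos(angles), 10 * np.sin(angles)])
path += rng.normal(scale=0.3, size=path.shape)
print(max_arc_error(path, center_xy=np.array([0.0, 0.0]), radius=10.0))
```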


2017 ◽  
Vol 117 (1) ◽  
pp. 388-402 ◽  
Author(s):  
Michael A. Cohen ◽  
George A. Alvarez ◽  
Ken Nakayama ◽  
Talia Konkle

Visual search is a ubiquitous visual behavior, and efficient search is essential for survival. Different cognitive models have explained the speed and accuracy of search based either on the dynamics of attention or on similarity of item representations. Here, we examined the extent to which performance on a visual search task can be predicted from the stable representational architecture of the visual system, independent of attentional dynamics. Participants performed a visual search task with 28 conditions reflecting different pairs of categories (e.g., searching for a face among cars, body among hammers, etc.). The time it took participants to find the target item varied as a function of category combination. In a separate group of participants, we measured the neural responses to these object categories when items were presented in isolation. Using representational similarity analysis, we then examined whether the similarity of neural responses across different subdivisions of the visual system had the requisite structure needed to predict visual search performance. Overall, we found strong brain/behavior correlations across most of the higher-level visual system, including both the ventral and dorsal pathways when considering both macroscale sectors as well as smaller mesoscale regions. These results suggest that visual search for real-world object categories is well predicted by the stable, task-independent architecture of the visual system.

NEW & NOTEWORTHY
Here, we ask which neural regions have neural response patterns that correlate with behavioral performance in a visual processing task. We found that the representational structure across all of high-level visual cortex has the requisite structure to predict behavior. Furthermore, when directly comparing different neural regions, we found that they all had highly similar category-level representational structures. These results point to a ubiquitous and uniform representational structure in high-level visual cortex underlying visual object processing.
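The representational similarity logic above can be sketched generically (this is not the authors' code): compute a neural representational dissimilarity matrix from category response patterns in a region of interest and correlate it with the behavioral search times for the corresponding category pairs. With 8 categories, the 28 pairs match the 28 search conditions; all data below are simulated placeholders.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Hypothetical inputs: one neural response pattern per object category for a
# region of interest, plus a mean search time for each pair of categories.
rng = np.random.default_rng(3)
n_categories, n_voxels = 8, 500
neural_patterns = rng.standard_normal((n_categories, n_voxels))
search_times = rng.uniform(0.5, 2.0, size=n_categories * (n_categories - 1) // 2)

# Neural representational dissimilarity for each category pair
# (correlation distance across voxels), in the same pair order as pdist.
neural_rdm = pdist(neural_patterns, metric="correlation")

# Brain/behavior correlation between neural dissimilarity and search times
rho, p = spearmanr(neural_rdm, search_times)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```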

