Increased influence of periphery on central visual processing in humans during walking

2018 ◽  
Author(s):  
Liyu Cao ◽  
Barbara Händel

Cognitive processes are almost exclusively investigated under highly controlled settings while voluntary body movements are suppressed. However, recent animal work suggests differences in sensory processing between movement states by showing drastically changed neural responses in early visual areas between locomotion and stillness. Does locomotion also modulate visual cortical activity in humans, and what are its perceptual consequences? Here, we present converging neurophysiological and behavioural evidence that walking leads to an increased influence of peripheral stimuli on central visual input. This modulation of visual processing due to walking is accompanied by a change in alpha oscillations, which is suggestive of an attentional shift to the periphery during walking. Overall, our study shows that strategies of sensory information processing can differ between movement states. This finding further demonstrates that a comprehensive understanding of human perception and cognition critically depends on the consideration of natural behaviour.

2015 ◽  
Vol 27 (4) ◽  
pp. 832-841 ◽  
Author(s):  
Amanda K. Robinson ◽  
Judith Reinhard ◽  
Jason B. Mattingley

Sensory information is initially registered within anatomically and functionally segregated brain networks but is also integrated across modalities in higher cortical areas. Although considerable research has focused on uncovering the neural correlates of multisensory integration for the modalities of vision, audition, and touch, much less attention has been devoted to understanding interactions between vision and olfaction in humans. In this study, we asked how odors affect neural activity evoked by images of familiar visual objects associated with characteristic smells. We employed scalp-recorded EEG to measure visual ERPs evoked by briefly presented pictures of familiar objects, such as an orange, mint leaves, or a rose. During presentation of each visual stimulus, participants inhaled either a matching odor, a nonmatching odor, or plain air. The N1 component of the visual ERP was significantly enhanced for matching odors in women, but not in men. This is consistent with evidence that women are superior in detecting, discriminating, and identifying odors and that they have a higher gray matter concentration in olfactory areas of the OFC. We conclude that early visual processing is influenced by olfactory cues because of associations between odors and the objects that emit them, and that these associations are stronger in women than in men.
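The N1 measurement described above can be sketched in a few lines. Everything below is synthetic and illustrative (epoch window, latency, amplitudes, and the assumption that a matching odour deepens the N1 are baked into the simulation); it is not the authors' recording pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                                 # sampling rate (Hz), illustrative
times = np.arange(-0.1, 0.5, 1 / fs)     # epoch window in seconds

def simulate_epochs(n_trials, n1_amp):
    """Synthetic occipital EEG epochs: Gaussian noise plus a negative
    deflection peaking ~170 ms after stimulus onset (the N1)."""
    n1 = n1_amp * np.exp(-((times - 0.17) ** 2) / (2 * 0.02 ** 2))
    return rng.normal(0, 2.0, (n_trials, times.size)) - n1

# Hypothetical conditions: matching odour assumed to enhance (deepen) the N1.
match = simulate_epochs(80, n1_amp=5.0)
nonmatch = simulate_epochs(80, n1_amp=3.0)

def n1_amplitude(epochs):
    """Mean ERP amplitude in a 150-200 ms window (negative-going N1)."""
    erp = epochs.mean(axis=0)                   # average over trials
    win = (times >= 0.15) & (times <= 0.20)
    return erp[win].mean()

print(n1_amplitude(match), n1_amplitude(nonmatch))  # match is more negative
```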


2021 ◽  
Author(s):  
Jonathan Schaffner ◽  
Philippe Tobler ◽  
Todd Hare ◽  
Rafael Polania

It has generally been presumed that sensory information encoded by a nervous system should be as accurate as its biological limitations allow. However, perhaps counterintuitively, accurate representations of sensory signals do not necessarily maximize the organism's chances of survival. To test this hypothesis, we developed a unified normative framework for fitness-maximizing encoding by combining theoretical insights from neuroscience, computer science, and economics. Initially, we applied predictions of this model to neural responses from large monopolar cells (LMCs) in the blowfly retina. We found that neural codes that maximize reward expectation, and not accurate sensory representations, account for retinal LMC activity. We also conducted experiments in humans and found that early sensory areas flexibly adopt neural codes that promote fitness maximization in a retinotopically specific manner, which impacted decision behavior. Thus, our results provide evidence that fitness-maximizing rules imposed by the environment are applied at the earliest stages of sensory processing.
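A toy illustration of the contrast between accuracy-maximizing and reward-maximizing codes (our own construction, not the paper's model): a one-bit encoder chooses its boundary either to minimize reconstruction error of the stimulus or to maximize expected reward under an asymmetric payoff. The two objectives place the boundary in different places:

```python
import numpy as np

rng = np.random.default_rng(1)
s = rng.normal(0.0, 1.0, 100_000)        # stimulus samples
thetas = np.linspace(-2, 2, 401)         # candidate 1-bit encoding boundaries

def mse(theta):
    """Reconstruction error when each half is decoded to its conditional mean."""
    hi = s > theta
    rec = np.where(hi, s[hi].mean(), s[~hi].mean())
    return np.mean((s - rec) ** 2)

def neg_reward(theta):
    """Negative expected reward when the 'high' codeword triggers an action
    that pays the stimulus value minus a fixed cost (cost = 1.0, an assumption)."""
    act = s > theta
    return -np.mean(np.where(act, s - 1.0, 0.0))

theta_acc = thetas[np.argmin([mse(t) for t in thetas])]
theta_rew = thetas[np.argmin([neg_reward(t) for t in thetas])]
print(theta_acc, theta_rew)   # boundaries diverge: ~0 vs ~1
```

For a symmetric stimulus distribution the accuracy-optimal boundary sits at the median, while the reward-optimal boundary shifts to where acting starts to pay off.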


2007 ◽  
Vol 97 (2) ◽  
pp. 1633-1641 ◽  
Author(s):  
Lotfi B. Merabet ◽  
Jascha D. Swisher ◽  
Stephanie A. McMains ◽  
Mark A. Halko ◽  
Amir Amedi ◽  
...  

The involvement of occipital cortex in sensory processing is not restricted solely to the visual modality. Tactile processing has been shown to modulate higher-order visual and multisensory integration areas in sighted as well as visually deprived subjects; however, the extent of involvement of early visual cortical areas remains unclear. To investigate this issue, we employed functional magnetic resonance imaging in normally sighted, briefly blindfolded subjects with well-defined visuotopic borders as they tactually explored and rated raised-dot patterns. Tactile task performance resulted in significant activation in primary visual cortex (V1) and deactivation of extrastriate cortical regions V2, V3, V3A, and hV4 with greater deactivation in dorsal subregions and higher visual areas. These results suggest that tactile processing affects occipital cortex via two distinct pathways: a suppressive top-down pathway descending through the visual cortical hierarchy and an excitatory pathway arising from outside the visual cortical hierarchy that drives area V1 directly.


Author(s):  
Antoine Barbot ◽  
Woon-Ju Park ◽  
Ru-Yuan Zhang ◽  
Krystel R Huxlin ◽  
Duje Tadin ◽  
...  

How we see is fundamentally limited by the eye’s optics, which determine retinal image quality and constrain neural processing. Elucidating how long-term exposure to optical defects alters visual processing is vital for understanding the human brain’s capacity for and limits of sensory plasticity. Using adaptive optics to bypass the eye’s optical aberrations, we assessed changes in visual processing in neurotypically-developed adults with keratoconus (KC)—a corneal disease causing severe optical aberrations during adulthood that cannot be fully corrected using conventional methods. As a result, KC patients are chronically exposed to degraded retinal images in their everyday life, making them an ideal model to understand how prolonged exposure to poor optical quality alters visual processing. Here, we show that when tested under similar fully-corrected optical conditions as neurotypical observers, KC patients exhibited altered contrast sensitivity, with impaired sensitivity for fine spatial details and better sensitivity for coarse spatial details. Both gains and losses in contrast sensitivity were more pronounced in patients with poorer habitual optical quality. Moreover, using an equivalent noise paradigm and a computational model of visual processing, we show that these alterations in visual processing are mediated by changes in signal enhancement of spatial frequency selective mechanisms. The present findings uncover fundamental properties of neural compensation mechanisms in response to long-term exposure to optical defects, which alter sensory processing and limit the benefits of improved optics. The outcome is a large-scale functional reorganization favoring the processing of sensory information less affected by the eye’s optics.

Significance statement: The eye’s optics represent an intrinsic limit to human visual perception, determining the quality of retinal images. Neural adaptation optimizes the brain’s limited sensory processing capacity to the structure of the degraded retinal inputs, providing an exceptional quality of vision given these optical limitations. Here, we show that prolonged exposure to poor optical quality results in a functional reorganization of visual processing that favors sensory information less affected by the eye’s optics. The present study helps elucidate how optical factors shape the way the brain processes visual information. Notably, the resulting adaptive neural plasticity limits the immediate perceptual benefits of optical interventions, a factor that must be taken into consideration when treating the increasing human population affected by optical defects.
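The equivalent-noise logic mentioned in the abstract can be sketched with the standard linear-amplifier model, in which squared contrast threshold grows linearly with external noise variance. The numbers below are invented for illustration and are not the authors' data:

```python
import numpy as np

# Linear-amplifier model of the equivalent-noise paradigm:
#   c^2(N_ext) = (N_eq + N_ext) / E,
# where N_eq is the observer's equivalent internal noise and E the
# calculation efficiency. Parameter values here are made up.
n_ext = np.array([0.0, 0.01, 0.02, 0.05, 0.1, 0.2])  # external noise variance
true_neq, true_eff = 0.02, 0.4
rng = np.random.default_rng(4)
c2 = (true_neq + n_ext) / true_eff * rng.normal(1, 0.03, n_ext.size)

# c^2 is linear in N_ext, so a least-squares line recovers both parameters:
slope, intercept = np.polyfit(n_ext, c2, 1)
eff_hat = 1.0 / slope
neq_hat = intercept * eff_hat
print(neq_hat, eff_hat)   # close to the generating values 0.02 and 0.4
```

Shifts in the fitted intercept versus slope are what distinguish changes in internal noise from changes in efficiency.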


2017 ◽  
Author(s):  
Jean-Rémi King ◽  
Valentin Wyart

The canonical computations involved in sensory processing, such as neural adaptation and prediction-error signals, have mainly been derived from studies investigating the neural responses elicited by a single stimulus. Here, we test whether these computations can be tracked in a quasi-continuous flow of visual stimulation by correlating scalp electroencephalography (EEG) recordings with simulations of neuronal populations. Fifteen subjects were presented with ~5,000 visual gratings in rapid sequences. Our results show that we can simultaneously decode, from the EEG sensors, up to four visual stimuli presented sequentially. Temporal generalization and source analyses reveal that the information contained in each stimulus is processed by a “visual pipeline”: a long cascade of transient processing stages which, overall, can encode multiple stimuli at once. Importantly, our data suggest that the early feedforward activity, but not the late feedback responses, is marked by an adaptation phenomenon. Overall, our approach demonstrates how theoretically derived computations, as isolated in single-stimulus paradigms, can be generalized to conditions of a continuous flow of sensory information.
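Temporal generalization (training a decoder at one time point and testing it at all others) can be sketched with a minimal nearest-class-mean decoder on synthetic data. The dimensions, the informative time window, and the train/test split below are illustrative, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_sensors, n_times = 200, 32, 20
y = rng.integers(0, 2, n_trials)                    # two stimulus classes

# Synthetic EEG: class information present only at time points 5-12.
X = rng.normal(0, 1, (n_trials, n_sensors, n_times))
pattern = rng.normal(0, 1, n_sensors)
X[:, :, 5:13] += np.outer(np.where(y == 1, 1.0, -1.0), pattern)[:, :, None]

train, test = np.arange(0, 100), np.arange(100, 200)  # simple split

def decode(t_train, t_test):
    """Nearest-class-mean decoder fit at t_train, evaluated at t_test."""
    m0 = X[train][y[train] == 0, :, t_train].mean(axis=0)
    m1 = X[train][y[train] == 1, :, t_train].mean(axis=0)
    Z = X[test][:, :, t_test]
    pred = np.linalg.norm(Z - m1, axis=1) < np.linalg.norm(Z - m0, axis=1)
    return np.mean(pred == (y[test] == 1))

# Temporal generalization matrix: rows = training time, columns = testing time.
tg = np.array([[decode(tr, te) for te in range(n_times)] for tr in range(n_times)])
print(tg.shape)
```

Off-diagonal spread in `tg` indicates how long a neural code remains stable; a diagonal-only pattern indicates a cascade of transient stages.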


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Alexander N. Pisarchik ◽  
Vladimir A. Maksimenko ◽  
Andrey V. Andreev ◽  
Nikita S. Frolov ◽  
Vladimir V. Makarov ◽  
...  

The neuronal brain network is a distributed computing system whose architecture is dynamically adjusted to provide optimal performance in sensory processing. A small amount of visual information that can be processed effortlessly activates neural activity in occipital and parietal areas. Conversely, a visual task that requires sustained attention to process a large amount of sensory information involves a set of long-distance connections between parietal and frontal areas, coordinating the activity of these distant brain regions. We demonstrate that while neural interactions result in coherence, the strongest connection is achieved through coherence resonance induced by adjusting intrinsic brain noise.
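Coherence resonance, the phenomenon invoked above, can be illustrated with a noise-driven excitable FitzHugh-Nagumo unit (a standard toy model, not the authors' network): spiking regularity, measured by the coefficient of variation (CV) of inter-spike intervals, depends non-monotonically on noise strength, with the most regular firing at an intermediate noise level.

```python
import numpy as np

rng = np.random.default_rng(3)

def fhn_spike_regularity(noise, T=4000.0, dt=0.05):
    """Simulate an excitable FitzHugh-Nagumo unit driven only by noise and
    return the CV of its inter-spike intervals (NaN if too few spikes)."""
    n = int(T / dt)
    v, w = -1.2, -0.6          # near the resting fixed point
    crossings, above = [], False
    for i in range(n):
        dv = v - v ** 3 / 3 - w
        dw = 0.08 * (v + 0.7 - 0.8 * w)
        v += dt * dv + np.sqrt(dt) * noise * rng.normal()
        w += dt * dw
        if v > 1.0 and not above:      # upward spike-threshold crossing
            crossings.append(i * dt)
            above = True
        elif v < 0.0:                  # re-arm the spike detector
            above = False
    isi = np.diff(crossings)
    return np.nan if isi.size < 5 else isi.std() / isi.mean()

cvs = {D: fhn_spike_regularity(D) for D in (0.1, 0.3, 1.5)}
print(cvs)
```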


2012 ◽  
Vol 24 (1) ◽  
pp. 28-38 ◽  
Author(s):  
Stephen J. Johnston ◽  
David E. J. Linden ◽  
Kimron L. Shapiro

If two centrally presented visual stimuli occur within approximately half a second of each other, the second target often fails to be reported correctly. This effect, called the attentional blink (AB; Raymond, J. E., Shapiro, K. L., & Arnell, K. M. Temporary suppression of visual processing in an RSVP task: An attentional blink? Journal of Experimental Psychology: Human Perception and Performance, 18, 849–860, 1992), has been attributed to a resource “bottleneck,” likely arising as a failure of attention during encoding into or retrieval from visual working memory (WM). Here we present participants with a hybrid WM–AB study while they undergo fMRI to provide insight into the neural underpinnings of this bottleneck. Consistent with a WM-based bottleneck account, fronto-parietal brain areas exhibited a WM load-dependent modulation of neural responses during the AB task. These results are consistent with the view that WM and attention share a capacity-limited resource and provide insight into the neural structures that underlie resource allocation in tasks requiring joint use of WM and attention.


2021 ◽  
Author(s):  
Evi Hendrikx ◽  
Jacob Paul ◽  
Martijn van Ackooij ◽  
Nathan van der Stoep ◽  
Ben Harvey

Quantifying the timing (duration and frequency) of brief visual events is vital to human perception, multisensory integration and action planning. Tuned neural responses to visual event timing have been found in areas of the association cortices implicated in these processes. Here we ask whether and where the human brain derives these timing-tuned responses from the responses of early visual cortex, which monotonically increase with event duration and frequency. Using 7T fMRI and neural model-based analyses, we find a gradual transition from monotonically increasing to timing-tuned neural responses beginning in area MT/V5. Therefore, successive stages of visual processing gradually derive timing-tuned response components from the inherent modulation of sensory responses by event timing. This additional timing-tuned response component was independent of retinotopic location. We propose that this hierarchical derivation of timing-tuned responses from sensory processing areas quantifies sensory event timing while abstracting temporal representations from the spatial properties of their inputs.
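The contrast between monotonic and timing-tuned response models can be sketched by fitting both to a synthetic tuned response. All parameters below are invented for illustration and are not the authors' model specification:

```python
import numpy as np

rng = np.random.default_rng(7)
durations = np.linspace(0.05, 1.0, 20)        # event durations (s)

# Synthetic response of a timing-tuned unit: Gaussian around a preferred duration.
resp = np.exp(-((durations - 0.4) ** 2) / (2 * 0.15 ** 2)) + rng.normal(0, 0.05, 20)

def r2(pred):
    """Coefficient of determination of a model prediction."""
    return 1 - np.sum((resp - pred) ** 2) / np.sum((resp - resp.mean()) ** 2)

# Monotonic model: best straight line in duration.
mono_r2 = r2(np.polyval(np.polyfit(durations, resp, 1), durations))

# Tuned model: grid search over Gaussian centre and width, with free scale/offset.
tuned_r2 = -np.inf
for mu in np.linspace(0.05, 1.0, 40):
    for sd in np.linspace(0.05, 0.5, 40):
        g = np.exp(-((durations - mu) ** 2) / (2 * sd ** 2))
        b = np.polyfit(g, resp, 1)            # linear scale + offset
        tuned_r2 = max(tuned_r2, r2(np.polyval(b, g)))
print(mono_r2, tuned_r2)   # the tuned model fits a tuned unit far better
```

Comparing such model fits voxel by voxel is one way a transition from monotonic to tuned coding can be mapped across areas.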


2015 ◽  
Vol 32 ◽  
Author(s):  
CHERYL A. OLMAN

Inferring neural responses from functional magnetic resonance imaging (fMRI) data is challenging. Even if we take advantage of high-field systems to acquire data with submillimeter resolution, we are still acquiring data in which a single datum summarizes the responses of tens of thousands of neurons. Excitation and inhibition, spikes and subthreshold membrane potential modulations, local and long-range computations, and tuned and nonselective responses are mixed together in one signal. With a priori knowledge of the underlying neural population responses, careful experiment design allows us to manipulate the experiment or task design so that subpopulations are selectively modulated, and our experiments can reveal those tuning functions. However, because we want to be able to use fMRI to discover new kinds of tuning functions and selectivity, we cannot limit ourselves to experiments in which we already know what we are looking for. Broadly speaking, analyses that rely on classification of responses that are distributed across the local neural population [multi-voxel pattern analyses (MVPA)] offer the ability to discover new kinds of information representation and selectivities in neural subpopulations. There is, however, no way to determine how the information discovered with MVPA or other analyses is related to the underlying neuronal tuning functions. Therefore, we must continue to rely on behavioral, computational, and animal models to develop theories of information representation in mid-tier visual cortical areas. Once encoding models exist, fMRI can be powerful for testing these a priori models of information representation. As an aid in developing these models, an important contribution that fMRI can make to our understanding of mid-tier visual areas is derived from connectivity analyses and experiments that study information sharing between visual areas. 
This ability to quantify localized population average responses throughout the brain is the strength we can best leverage to discover new properties of local and long-range neural networks.
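The MVPA logic described above can be sketched with a leave-one-run-out pattern classifier on synthetic voxel data in which the two conditions differ only in their distributed pattern, not in their mean amplitude (all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
n_voxels, n_runs, n_per_run = 50, 6, 8

# Two conditions with identical mean amplitude but distinct multi-voxel patterns
# (each pattern is demeaned, so a univariate average cannot tell them apart).
pa = rng.normal(0, 1, n_voxels); pa -= pa.mean()
pb = rng.normal(0, 1, n_voxels); pb -= pb.mean()

X, y, run = [], [], []
for r in range(n_runs):
    for cond, p in ((0, pa), (1, pb)):
        X.append(p + rng.normal(0, 2.0, (n_per_run, n_voxels)))
        y += [cond] * n_per_run
        run += [r] * n_per_run
X, y, run = np.vstack(X), np.array(y), np.array(run)

# Leave-one-run-out cross-validated nearest-class-mean decoder.
accs = []
for r in range(n_runs):
    tr, te = run != r, run == r
    m0, m1 = X[tr & (y == 0)].mean(0), X[tr & (y == 1)].mean(0)
    pred = np.linalg.norm(X[te] - m1, axis=1) < np.linalg.norm(X[te] - m0, axis=1)
    accs.append(np.mean(pred.astype(int) == y[te]))
print(np.mean(accs))   # above chance despite equal univariate means
```

Cross-validating across runs rather than trials avoids the run-specific artifacts that otherwise inflate pattern-classification accuracy.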


2021 ◽  
Author(s):  
João D. Semedo ◽  
Anna I. Jasper ◽  
Amin Zandvakili ◽  
Amir Aschner ◽  
Christian K. Machens ◽  
...  

Brain function relies on the coordination of activity across multiple, recurrently connected, brain areas. For instance, sensory information encoded in early sensory areas is relayed to, and further processed by, higher cortical areas and then fed back. However, the way in which feedforward and feedback signaling interact with one another is incompletely understood. Here we investigate this question by leveraging simultaneous neuronal population recordings in early and midlevel visual areas (V1-V2 and V1-V4). Using a dimensionality reduction approach, we find that population interactions are feedforward-dominated shortly after stimulus onset and feedback-dominated during spontaneous activity. The population activity patterns most correlated across areas were distinct during feedforward- and feedback-dominated periods. These results suggest that feedforward and feedback signaling rely on separate “channels”, such that feedback signaling does not directly affect activity that is fed forward.
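A minimal sketch of finding the across-area activity patterns most correlated between two recorded populations, here via canonical correlation analysis on synthetic data (the authors' specific dimensionality reduction method may differ):

```python
import numpy as np

rng = np.random.default_rng(6)
n_t, n1, n2 = 2000, 30, 25   # time points, neurons in "V1" and "V2" (made up)

# Synthetic populations sharing one latent signal plus private variability.
latent = rng.normal(0, 1, n_t)
X = np.outer(latent, rng.normal(0, 1, n1)) + rng.normal(0, 1.0, (n_t, n1))
Y = np.outer(latent, rng.normal(0, 1, n2)) + rng.normal(0, 1.0, (n_t, n2))

def cca_correlations(X, Y):
    """Canonical correlations between two population recordings:
    whiten each population's time courses, then take the singular values
    of their cross-product."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    def whiten(A):
        U, _, _ = np.linalg.svd(A, full_matrices=False)
        return U                         # orthonormal time courses
    return np.linalg.svd(whiten(X).T @ whiten(Y), compute_uv=False)

corrs = cca_correlations(X, Y)           # descending canonical correlations
print(corrs[:3])   # the first dimension captures the shared latent
```

A single strong canonical correlation, as here, indicates that inter-area interaction is confined to a low-dimensional "channel" rather than spread over all activity patterns.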

