Transformation of population code from dLGN to V1 facilitates linear decoding

2019
Author(s):
N. Alex Cayco Gajic
Séverine Durand
Michael Buice
Ramakrishnan Iyer
Clay Reid
...

Summary: How neural populations represent sensory information, and how that representation is transformed from one brain area to another, are fundamental questions of neuroscience. The dorsolateral geniculate nucleus (dLGN) and primary visual cortex (V1) represent two distinct stages of early visual processing. Classic sparse coding theories propose that V1 neurons represent local features of images. More recent theories have argued that the visual pathway transforms visual representations to become increasingly linearly separable. To test these ideas, we simultaneously recorded the spiking activity of mouse dLGN and V1 in vivo. We find strong evidence for both sparse coding and linear separability theories. Surprisingly, the correlations between neurons in V1 (but not dLGN) were shaped so as to be irrelevant for stimulus decoding, a feature that we show enables linear separability. Our results therefore suggest that the dLGN-V1 transformation reshapes correlated variability in a manner that facilitates linear decoding while producing a sparse code.
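The idea that correlations can be shaped so as to be irrelevant for decoding can be illustrated with a toy simulation (not the paper's analysis): shared noise confined to directions orthogonal to the signal axis barely affects a simple linear readout, whereas the same noise placed along the signal axis is information-limiting. All population sizes, gains, and noise levels below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials = 50, 2000

# Unit "signal" axis separating the mean responses to the two stimuli.
signal = rng.standard_normal(n_neurons)
signal /= np.linalg.norm(signal)

def simulate(noise_axis, gain=2.0):
    """Responses to two stimuli with shared (correlated) noise along noise_axis."""
    def one(mu):
        shared = gain * rng.standard_normal((n_trials, 1))   # correlated component
        private = 0.3 * rng.standard_normal((n_trials, n_neurons))
        return mu + shared * noise_axis + private
    return one(0.5 * signal), one(-0.5 * signal)

def accuracy(a, b):
    """Mean-difference linear readout: train on half the trials, test on the rest."""
    half = n_trials // 2
    w = a[:half].mean(0) - b[:half].mean(0)
    thresh = 0.5 * (a[:half].mean(0) + b[:half].mean(0)) @ w
    hits = np.sum(a[half:] @ w > thresh) + np.sum(b[half:] @ w <= thresh)
    return hits / n_trials

# Noise axis orthogonal to the signal: decoding-irrelevant correlations.
orth = rng.standard_normal(n_neurons)
orth -= (orth @ signal) * signal
orth /= np.linalg.norm(orth)

acc_orth = accuracy(*simulate(orth))      # correlations orthogonal to signal
acc_along = accuracy(*simulate(signal))   # information-limiting correlations
print(acc_orth, acc_along)                # orthogonal noise decodes far better
```

The same total noise variance thus has very different consequences for a linear decoder depending only on its orientation relative to the signal axis.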

2015
Vol 27 (4)
pp. 832-841
Author(s):
Amanda K. Robinson
Judith Reinhard
Jason B. Mattingley

Sensory information is initially registered within anatomically and functionally segregated brain networks but is also integrated across modalities in higher cortical areas. Although considerable research has focused on uncovering the neural correlates of multisensory integration for the modalities of vision, audition, and touch, much less attention has been devoted to understanding interactions between vision and olfaction in humans. In this study, we asked how odors affect neural activity evoked by images of familiar visual objects associated with characteristic smells. We employed scalp-recorded EEG to measure visual ERPs evoked by briefly presented pictures of familiar objects, such as an orange, mint leaves, or a rose. During presentation of each visual stimulus, participants inhaled either a matching odor, a nonmatching odor, or plain air. The N1 component of the visual ERP was significantly enhanced for matching odors in women, but not in men. This is consistent with evidence that women are superior in detecting, discriminating, and identifying odors and that they have a higher gray matter concentration in olfactory areas of the OFC. We conclude that early visual processing is influenced by olfactory cues because of associations between odors and the objects that emit them, and that these associations are stronger in women than in men.


2021
Author(s):
Chaojuan Yang
Yonglu Tian
Feng Su
Yangzhen Wang
Mengna Liu
...

Abstract: Many people affected by fragile X syndrome (FXS) and autism spectrum disorders have sensory processing deficits, such as hypersensitivity to auditory, tactile, and visual stimuli. As in humans with FXS, loss of Fmr1 in rodents also causes sensory, behavioral, and cognitive deficits. However, the neural mechanisms underlying sensory impairment, especially visual impairment, remain unclear. It remains elusive whether the visual processing deficits originate from corrupted inputs, impaired perception in the primary sensory cortex, or altered integration in higher cortex, and there is no effective treatment. In this study, we used a genetic knockout mouse model (Fmr1KO), in vivo imaging, and behavioral measurements to show that the loss of Fmr1 impairs signal processing in the primary visual cortex (V1). Specifically, Fmr1KO mice showed enhanced responses to low-intensity stimuli but normal responses to high-intensity stimuli. This abnormality was accompanied by enhanced local network connectivity in V1 microcircuits and increased dendritic complexity of V1 neurons. These effects were ameliorated by acute application of GABAA receptor activators, which enhanced the activity of inhibitory neurons, or by reintroducing Fmr1 expression in knockout V1 neurons, in both juvenile and young-adult mice. Overall, V1 plays an important role in the visual abnormalities of Fmr1KO mice, and it may be possible to rescue these sensory disturbances in FXS and autism patients even after development.


2021
Author(s):
Jeremie Sibille
Carolin Gehr
Kai Lun Teh
Jens Kremkow

The superior colliculus (SC) is a midbrain structure that plays a central role in visual processing. Although we have learned a considerable amount about the function of single SC neurons, the way in which sensory information is represented and processed at the population level in awake, behaving animals, and across a large region of the retinotopic map, remains largely unknown. Partly because the SC lies beneath the cortical sheet and the transverse sinus, it is technically difficult to measure neuronal activity from large populations of SC neurons. To address this, we propose a tangential recording configuration using high-density electrode probes (Neuropixels) in mouse SC in vivo that places a large number of recording sites (~200) inside the SC circuitry. This approach provides a unique opportunity to measure the activity of SC neuronal populations spanning up to ~2 mm of SC tissue, with receptive fields covering an extended region of the visual field. Here we describe how to perform tangential recordings along the anterior-posterior and medio-lateral axes of the mouse SC in vivo, and how to combine this approach with optogenetic tools for cell-type identification at the population level.


2018
Author(s):
Aram Giahi Saravani
Kiefer J. Forseth
Nitin Tandon
Xaq Pitkow

Abstract: Brain computations involve multiple processes by which sensory information is encoded and transformed to drive behavior. These computations are thought to be mediated by dynamic interactions between populations of neurons. Here we demonstrate that human brains exhibit a reliable sequence of neural interactions during speech production. We use an autoregressive hidden Markov model to identify dynamical network states exhibited by electrocorticographic signals recorded from human neurosurgical patients. Our method resolves dynamic latent network states on a trial-by-trial basis. We characterize individual network states according to the patterns of directional information flow between cortical regions of interest. These network states occur consistently and in a specific, interpretable sequence across trials and subjects: a fixed-length visual processing state is followed by a variable-length language state, and then by a terminal articulation state. This empirical evidence validates classical psycholinguistic theories that have posited such intermediate states during speaking. It further reveals that these state dynamics are not localized to one brain area or one sequence of areas, but are instead a network phenomenon.
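The visual → language → articulation sequence can be illustrated with a toy left-to-right HMM decoded by the Viterbi algorithm. This is a minimal sketch, not the autoregressive HMM used in the study; all transition probabilities and per-timestep likelihoods are invented for illustration.

```python
import numpy as np

# Toy 3-state left-to-right HMM (0 = visual, 1 = language, 2 = articulation).
# A tiny epsilon keeps forbidden transitions finite in log space.
log_trans = np.log(np.array([
    [0.9, 0.1, 0.0],   # visual: stay, or advance to language
    [0.0, 0.9, 0.1],   # language: stay, or advance to articulation
    [0.0, 0.0, 1.0],   # articulation is terminal
]) + 1e-12)
log_init = np.log(np.array([1.0, 0.0, 0.0]) + 1e-12)

def viterbi(log_lik):
    """Most likely state path given per-timestep log-likelihoods of shape (T, 3)."""
    T, K = log_lik.shape
    score = log_init + log_lik[0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans        # cand[i, j]: best path to i, then i -> j
        back[t] = np.argmax(cand, axis=0)
        score = cand[back[t], np.arange(K)] + log_lik[t]
    path = [int(np.argmax(score))]
    for t in range(T - 1, 0, -1):                # backtrack from the final state
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Fake evidence: early frames favor state 0, middle frames 1, late frames 2.
log_lik = np.log(np.array([[0.8, 0.1, 0.1]] * 4 +
                          [[0.1, 0.8, 0.1]] * 6 +
                          [[0.1, 0.1, 0.8]] * 3))
print(viterbi(log_lik))   # [0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2]
```

With strong evidence, the decoded path switches state exactly at the evidence boundaries, recovering a fixed ordering with variable state durations, as in the sequence the study reports.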


2021
Author(s):
Matthijs N. oude Lohuis
Alexis Cerván Cantón
Cyriel M. A. Pennartz
Umberto Olcese

Summary: Over the past few years, the various areas that surround the primary visual cortex in the mouse have been associated with many functions, ranging from higher-order visual processing to decision making. Recently, some studies have shown that higher-order visual areas influence the activity of the primary visual cortex, refining its processing capabilities. Here we studied how in vivo optogenetic inactivation of two higher-order visual areas with different functional properties affects responses evoked by moving bars in the primary visual cortex. In contrast to the prevailing view, our results demonstrate that distinct higher-order visual areas similarly modulate early visual processing. In particular, these areas broaden stimulus responsiveness in the primary visual cortex by amplifying sensory-evoked responses to stimuli not moving along the orientation preferred by individual neurons. Thus, feedback from higher-order visual areas amplifies V1 responses to non-preferred stimuli, which may aid their detection.


2021
Vol 12 (1)
Author(s):
MohammadMehdi Kafashan
Anna W. Jaffe
Selmaan N. Chettih
Ramon Nogueira
Iñigo Arandia-Romero
...

Abstract: How is information distributed across large neuronal populations within a given brain area? Information may be distributed roughly evenly across neuronal populations, so that total information scales linearly with the number of recorded neurons. Alternatively, the neural code might be highly redundant, meaning that total information saturates. Here we investigate how sensory information about the direction of a moving visual stimulus is distributed across hundreds of simultaneously recorded neurons in mouse primary visual cortex. We show that information scales sublinearly due to correlated noise in these populations. We compartmentalize noise correlations into information-limiting and nonlimiting components, then extrapolate to predict how information grows with even larger neural populations. We predict that tens of thousands of neurons encode 95% of the information about visual stimulus direction, much less than the number of neurons in primary visual cortex. These findings suggest that the brain uses a widely distributed, but nonetheless redundant, code that supports recovering most sensory information from smaller subpopulations.
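Saturating scaling of this kind is commonly summarized by an information curve of the form I(N) = I0·N / (1 + ε·N), which grows linearly for small N and approaches I0/ε as N → ∞, with ε set by the information-limiting correlation strength. A quick numeric check (I0 and ε below are illustrative values, not estimates from this study) shows how a finite population already reaches 95% of the asymptote:

```python
import numpy as np

# Saturating information curve under information-limiting correlations:
# I(N) = I0 * N / (1 + eps * N), approaching I0/eps as N grows.
I0, eps = 1.0, 1e-3          # illustrative parameters, not fit to data

def info(n):
    return I0 * n / (1.0 + eps * n)

i_max = I0 / eps                        # asymptotic information
n95 = 0.95 / (eps * (1.0 - 0.95))       # N at which I(N) = 0.95 * i_max
print(n95, info(n95) / i_max)           # ~19,000 neurons reach 95% of the asymptote
```

Solving I(N) = 0.95·I0/ε gives N = 0.95 / (ε·0.05), so with ε = 10⁻³ about nineteen thousand neurons suffice, which is why "tens of thousands of neurons" can carry 95% of the information even though far more neurons exist in the area.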


2000
Vol 84 (6)
pp. 2984-2997
Author(s):
Per Jenmalm
Seth Dahlstedt
Roland S. Johansson

Most objects that we manipulate have curved surfaces. We have analyzed how subjects, during a prototypical manipulatory task, use visual and tactile sensory information to adapt fingertip actions to changes in object curvature. Subjects grasped an elongated object at one end using a precision grip and lifted it while instructed to keep it level. The principal load of the grasp was tangential torque, due to the location of the object's center of mass in relation to the horizontal grip axis joining the centers of the opposing grasp surfaces. The curvature strongly influenced the grip forces required to prevent rotational slips. Likewise, the curvature influenced the rotational yield of the grasp that developed under the tangential torque load, owing to the viscoelastic properties of the fingertip pulps. Subjects scaled the grip forces parametrically with object curvature for grasp stability. Moreover, in a curvature-dependent manner, subjects twisted the grasp around the grip axis by a radial flexion of the wrist to keep the desired object orientation despite the rotational yield. To adapt these fingertip actions to object curvature, subjects could use both vision and tactile sensibility integrated with predictive control. During combined blindfolding and digital anesthesia, however, the motor output failed to predict the consequences of the prevailing curvature. Subjects used vision to identify the curvature for efficient feedforward retrieval of grip force requirements before executing the motor commands. Digital anesthesia caused little impairment of grip force control when subjects had vision available, but the adaptation of the twist became delayed. Visual cues about the form of the grasp surface obtained before contact were used to scale the grip force, whereas the scaling of the twist depended on visual cues related to object movement. Thus subjects apparently relied on different visuomotor mechanisms for adaptation of grip force and grasp kinematics. In contrast, blindfolded subjects used tactile cues about the prevailing curvature, obtained after contact with the object, for feedforward adaptation of both grip force and twist. We conclude that humans use both vision and tactile sensibility for feedforward parametric adaptation of grip forces and grasp kinematics to object curvature. Normal control of the twist action, however, requires digital afferent input, and different visuomotor mechanisms support the control of the grasp twist and the grip force. This differential use of vision may have a bearing on the two-stream model of human visual processing.


The construction of directionally selective units, and their use in the processing of visual motion, are considered. The zero crossings of ∇²G(x, y) ∗ I(x, y) are located, as in Marr & Hildreth (1980). That is, the image is filtered through centre-surround receptive fields, and the zero values in the output are found. In addition, the time derivative ∂[∇²G(x, y) ∗ I(x, y)]/∂t is measured at the zero crossings, and serves to constrain the local direction of motion to within 180°. The direction of motion can be determined in a second stage, for example by combining the local constraints. The second part of the paper suggests a specific model of the information processing by the X and Y cells of the retina and lateral geniculate nucleus, and certain classes of cortical simple cells. A number of psychophysical and neurophysiological predictions are derived from the theory.
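A minimal sketch of the first stage, in one dimension and using SciPy's `gaussian_laplace` as the ∇²G filter (the moving step edge and filter width are illustrative choices): locate the zero crossings of the filtered image, then read the sign of the temporal derivative there, which constrains the local direction of motion.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

# Two frames of a 1-D dark-to-light step edge moving rightward by one sample.
x = np.arange(200)
frame0 = (x > 100).astype(float)
frame1 = (x > 101).astype(float)

# ∇²G ∗ I: filter each frame with the Laplacian of a Gaussian.
s0 = gaussian_laplace(frame0, sigma=3.0)
s1 = gaussian_laplace(frame1, sigma=3.0)

# Zero crossings of the filtered frame (strict sign changes only,
# with a small threshold to ignore numerical noise in flat regions).
zc = np.where(s0[:-1] * s0[1:] < -1e-12)[0]

# Temporal derivative ∂[∇²G ∗ I]/∂t at the crossing: for this edge polarity
# a positive value indicates rightward motion (to within 180° in 2-D).
dsdt = (s1 - s0)[zc]
print(zc, dsdt)
```

For this rising edge the filtered signal is positive to the left of the edge and negative to the right, so a rightward shift makes the temporal derivative at the crossing positive; a leftward shift would flip its sign, which is exactly the 180° constraint described above.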

