Stable Encoding of Visual Cues in the Mouse Retrosplenial Cortex

2020 ◽  
Vol 30 (8) ◽  
pp. 4424-4437 ◽  
Author(s):  
Anna Powell ◽  
William M Connelly ◽  
Asta Vasalauskaite ◽  
Andrew J D Nelson ◽  
Seralynne D Vann ◽  
...  

Abstract The rodent retrosplenial cortex (RSC) functions as an integrative hub for sensory and motor signals, serving roles in both navigation and memory. While RSC is reciprocally connected with the sensory cortex, the form in which sensory information is represented in the RSC and how it interacts with motor feedback is unclear and likely to be critical to computations involved in navigation such as path integration. Here, we used 2-photon cellular imaging of neural activity of putative excitatory (CaMKII expressing) and inhibitory (parvalbumin expressing) neurons to measure visual and locomotion evoked activity in RSC and compare it to primary visual cortex (V1). We observed stimulus position and orientation tuning, and a retinotopic organization. Locomotion modulation of activity of single neurons, both in darkness and light, was more pronounced in RSC than V1, and while locomotion modulation was strongest in RSC parvalbumin-positive neurons, visual-locomotion integration was found to be more supralinear in CaMKII neurons. Longitudinal measurements showed that response properties were stably maintained over many weeks. These data provide evidence for stable representations of visual cues in RSC that are spatially selective. These may provide sensory data to contribute to the formation of memories of spatial information.
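The abstract's claim that visual–locomotion integration is "more supralinear" in CaMKII neurons can be made concrete with a simple additivity comparison. The sketch below is an illustration only, not the authors' analysis pipeline: the function name, the normalized index, and the example response values are all assumptions.

```python
def supralinearity_index(r_vis, r_run, r_both):
    """Normalized index of supralinear integration for one neuron.

    r_vis, r_run, r_both: mean evoked responses (e.g. dF/F) to the visual
    stimulus alone, locomotion alone, and their combination. The index is
    positive when the combined response exceeds the linear sum (supralinear),
    zero when integration is exactly additive, and negative when sublinear.
    This particular normalization is an assumption, not the paper's metric.
    """
    linear_sum = r_vis + r_run
    return (r_both - linear_sum) / (r_both + linear_sum)

# Hypothetical example: the combined response exceeds the sum of the parts,
# so the index comes out positive (supralinear integration).
print(supralinearity_index(0.2, 0.3, 0.8))
```

Comparing the distribution of such an index between CaMKII-positive and parvalbumin-positive populations would express the abstract's contrast quantitatively.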

2019 ◽  
Author(s):  
Anna Powell ◽  
William M. Connelly ◽  
Asta Vasalauskaite ◽  
Andrew Nelson ◽  
Seralynne D. Vann ◽  
...  

Abstract The rodent retrosplenial cortex functions as an integrative hub for sensory and motor signals, serving roles in both navigation and memory. While retrosplenial cortex (RSC) is reciprocally connected with the sensory cortex, the form in which sensory information is represented in the retrosplenial cortex and how it interacts with behavioural state is unclear. Here, we used 2-photon cellular imaging of neural activity of putative excitatory (CaMKII expressing) and inhibitory (parvalbumin expressing) neurons to measure visual and running evoked activity in RSC and compare it to primary visual cortex (V1). We found that stimulus position and orientation information was preserved between V1 and RSC, and additionally that positional information was organised topographically. Stimulus directional preference was biased towards nasal-temporal flow. Locomotion modulation of activity of single neurons, both in darkness and light, was also more pronounced in RSC than V1, and strongest in parvalbumin-positive neurons. Longitudinal measurements of single neurons showed that these response features were stably maintained over many weeks. These data provide evidence for stable representations of visual cues in retrosplenial cortex which are highly spatially selective. These may provide sensory data to contribute to the formation of memories of spatial information.


Author(s):  
Lorin Timaeus ◽  
Laura Geid ◽  
Gizem Sancer ◽  
Mathias F. Wernet ◽  
Thomas Hummel

Summary One hallmark of the visual system is the strict retinotopic organization from the periphery towards the central brain, spanning multiple layers of synaptic integration. Recent Drosophila studies on the computation of distinct visual features have shown that retinotopic representation is often lost beyond the optic lobes, due to convergence of columnar neuron types onto optic glomeruli. Nevertheless, functional imaging revealed a spatially accurate representation of visual cues in the central complex (CX), raising the question of how this is implemented on a circuit level. By characterizing the afferents to a specific visual glomerulus, the anterior optic tubercle (AOTU), we discovered a spatial segregation of topographic versus non-topographic projections from molecularly distinct classes of medulla projection neurons (medullo-tubercular, or MeTu neurons). Distinct classes of topographic versus non-topographic MeTus form parallel channels, terminating in separate AOTU domains. Both types then synapse onto separate matching topographic fields of tubercular-bulbar (TuBu) neurons which relay visual information towards the dendritic fields of central complex ring neurons in the bulb neuropil, where distinct bulb sectors correspond to distinct ring domains in the ellipsoid body. Hence, peripheral topography is maintained due to stereotypic circuitry within each TuBu class, providing the structural basis for spatial representation of visual information in the central complex. Together with previous data showing rough topography of lobula projections to a different AOTU subunit, our results further highlight the AOTU's role as a prominent relay station for spatial information from the retina to the central brain.


2020 ◽  
Vol 10 (11) ◽  
pp. 854 ◽  
Author(s):  
Rafał Czajkowski ◽  
Bartosz Zglinicki ◽  
Emilia Rejmak ◽  
Witold Konopka

The retrosplenial cortex (RSC) belongs to the spatial memory circuit, but the precise timeline of its involvement and the relation to hippocampal activation have not been sufficiently described. We trained rats in a modified version of the T maze with transparent walls and distant visual cues to induce the formation of allocentric spatial memory. We used two distinct salient contexts associated with opposite sequences of turns. Switching between contexts allowed us to test the ability of animals to utilize spatial information. We then applied a CatFISH approach with a probe directed against the Arc immediate early gene in order to visualize the associated memory engrams in the RSC and the hippocampus. After training, rats displayed two strategies to solve the maze, with half of the animals relying on distant spatial cues (allocentric) and the other half using an egocentric strategy. Rats that did not utilize the spatial cues showed higher Arc levels in the RSC compared to the allocentric group. The overlap between the two context engrams in the RSC was similar in both groups. These results show differential involvement of the RSC and hippocampus during spatial memory acquisition and point toward their distinct roles in forming cognitive maps.


2000 ◽  
Vol 84 (6) ◽  
pp. 2984-2997 ◽  
Author(s):  
Per Jenmalm ◽  
Seth Dahlstedt ◽  
Roland S. Johansson

Most objects that we manipulate have curved surfaces. We have analyzed how subjects during a prototypical manipulatory task use visual and tactile sensory information for adapting fingertip actions to changes in object curvature. Subjects grasped an elongated object at one end using a precision grip and lifted it while instructed to keep it level. The principal load of the grasp was tangential torque due to the location of the center of mass of the object in relation to the horizontal grip axis joining the centers of the opposing grasp surfaces. The curvature strongly influenced the grip forces required to prevent rotational slips. Likewise, the curvature influenced the rotational yield of the grasp that developed under the tangential torque load due to the viscoelastic properties of the fingertip pulps. Subjects scaled the grip forces parametrically with object curvature for grasp stability. Moreover, in a curvature-dependent manner, subjects twisted the grasp around the grip axis by a radial flexion of the wrist to keep the desired object orientation despite the rotational yield. To adapt these fingertip actions to object curvature, subjects could use both vision and tactile sensibility integrated with predictive control. During combined blindfolding and digital anesthesia, however, the motor output failed to predict the consequences of the prevailing curvature. Subjects used vision to identify the curvature for efficient feedforward retrieval of grip force requirements before executing the motor commands. Digital anesthesia caused little impairment of grip force control when subjects had vision available, but the adaptation of the twist became delayed. Visual cues about the form of the grasp surface obtained before contact were used to scale the grip force, whereas the scaling of the twist depended on visual cues related to object movement. Thus subjects apparently relied on different visuomotor mechanisms for adaptation of grip force and grasp kinematics.
In contrast, blindfolded subjects used tactile cues about the prevailing curvature obtained after contact with the object for feedforward adaptation of both grip force and twist. We conclude that humans use both vision and tactile sensibility for feedforward parametric adaptation of grip forces and grasp kinematics to object curvature. Normal control of the twist action, however, requires digital afferent input, and different visuomotor mechanisms support the control of the grasp twist and the grip force. This differential use of vision may have a bearing on the two-stream model of human visual processing.
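The "tangential torque due to the location of the center of mass" described above is just the gravitational moment about the horizontal grip axis. A toy calculation makes the load explicit; the function name and the example mass and offset are assumptions, and the curvature-dependent slip threshold the paper measures is deliberately not modeled here.

```python
def tangential_torque(mass_kg, com_offset_m, g=9.81):
    """Gravitational torque (N*m) about the horizontal grip axis when the
    object's centre of mass sits com_offset_m from that axis. This is the
    tangential torque load the grip must resist to prevent rotational slip;
    how much grip force that requires depends on surface curvature and
    friction, which this toy model does not capture."""
    return mass_kg * g * com_offset_m

# Hypothetical numbers: a 0.3 kg object with its CoM 5 cm from the grip axis.
tau = tangential_torque(0.3, 0.05)
print(round(tau, 3))  # ~0.147 N*m
```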


2003 ◽  
Vol 89 (1) ◽  
pp. 390-400 ◽  
Author(s):  
L. H. Zupan ◽  
D. M. Merfeld

Sensory systems often provide ambiguous information. For example, otolith organs measure gravito-inertial force (GIF), the sum of gravitational force and inertial force due to linear acceleration. However, according to Einstein's equivalence principle, a change in gravitational force due to tilt is indistinguishable from a change in inertial force due to translation. Therefore the central nervous system (CNS) must use other sensory cues to distinguish tilt from translation. For example, the CNS might use dynamic visual cues indicating rotation to help determine the orientation of gravity (tilt). This, in turn, might influence the neural processes that estimate linear acceleration, since the CNS might estimate gravity and linear acceleration such that the difference between these estimates matches the measured GIF. Depending on specific sensory information inflow, inaccurate estimates of gravity and linear acceleration can occur. Specifically, we predict that illusory tilt caused by roll optokinetic cues should lead to a horizontal vestibuloocular reflex compensatory for an interaural estimate of linear acceleration, even in the absence of actual linear acceleration. To investigate these predictions, we measured eye movements binocularly using infrared video methods in 17 subjects during and after optokinetic stimulation about the subject's nasooccipital (roll) axis (60°/s, clockwise or counterclockwise). The optokinetic stimulation was applied for 60 s followed by 30 s in darkness. We simultaneously measured subjective roll tilt using a somatosensory bar. Each subject was tested in three different orientations: upright, pitched forward 10°, and pitched backward 10°. Five subjects reported significant subjective roll tilt (>10°) in directions consistent with the direction of the optokinetic stimulation. 
In addition to torsional optokinetic nystagmus and afternystagmus, we measured a horizontal nystagmus to the right during and following clockwise (CW) stimulation and to the left during and following counterclockwise (CCW) stimulation. These measurements match predictions that subjective tilt in the absence of real tilt should induce a nonzero estimate of interaural linear acceleration and, therefore, a horizontal eye response. Furthermore, as predicted, the horizontal response in the dark was larger for Tilters (n = 5) than for Non-Tilters (n = 12).
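The abstract's core computation, that the CNS may pick gravity and acceleration estimates whose difference matches the measured gravito-inertial force (GIF), can be sketched in two dimensions. The sign convention, the two-component (interaural, head-vertical) frame, and the function name below are assumptions made for illustration only.

```python
import numpy as np

def acceleration_estimate(gif, tilt_deg, g=9.81):
    """Given the measured GIF (interaural, head-vertical components, m/s^2)
    and a (possibly illusory) roll-tilt estimate in degrees, return the
    linear-acceleration estimate a_hat satisfying  g_hat - a_hat = gif,
    i.e. the difference between the estimates matches the measurement."""
    g_hat = np.array([g * np.sin(np.radians(tilt_deg)),   # interaural
                      g * np.cos(np.radians(tilt_deg))])  # head-vertical
    return g_hat - gif

# Upright and stationary, the GIF is purely head-vertical.
gif = np.array([0.0, 9.81])
# A 10-degree illusory roll tilt (as from roll optokinetic cues) forces a
# nonzero interaural acceleration estimate despite zero actual acceleration.
a_hat = acceleration_estimate(gif, 10.0)
print(a_hat)
```

A nonzero interaural component of `a_hat` is exactly what would drive the compensatory horizontal vestibuloocular reflex the study predicts and observes.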


2018 ◽  
Vol 5 (2) ◽  
pp. 171785 ◽  
Author(s):  
Martin F. Strube-Bloss ◽  
Wolfgang Rössler

Flowers attract pollinating insects like honeybees by sophisticated compositions of olfactory and visual cues. Using honeybees as a model to study olfactory–visual integration at the neuronal level, we focused on mushroom body (MB) output neurons (MBON). From a neuronal circuit perspective, MBONs represent a prominent level of sensory-modality convergence in the insect brain. We established an experimental design allowing electrophysiological characterization of olfactory, visual, as well as olfactory–visual induced activation of individual MBONs. Despite the obvious convergence of olfactory and visual pathways in the MB, we found numerous unimodal MBONs. However, a substantial proportion of MBONs (32%) responded to both modalities and thus integrated olfactory–visual information across MB input layers. In these neurons, representation of the olfactory–visual compound was significantly increased compared with that of single components, suggesting an additive, but nonlinear integration. Population analyses of olfactory–visual MBONs revealed three categories: (i) olfactory, (ii) visual and (iii) olfactory–visual compound stimuli. Interestingly, no significant differentiation was apparent regarding different stimulus qualities within these categories. We conclude that encoding of stimulus quality within a modality is largely completed at the level of MB input, and information at the MB output is integrated across modalities to efficiently categorize sensory information for downstream behavioural decision processing.


Author(s):  
Elizabeth Thorpe Davis ◽  
Larry F. Hodges

Two fundamental purposes of human spatial perception, in either a real or virtual 3D environment, are to determine where objects are located in the environment and to distinguish one object from another. Although various sensory inputs, such as haptic and auditory inputs, can provide this spatial information, vision usually provides the most accurate, salient, and useful information (Welch and Warren, 1986). Moreover, of the visual cues available to humans, stereopsis provides an enhanced perception of depth and of three-dimensionality for a visual scene (Yeh and Silverstein, 1992). (Stereopsis or stereoscopic vision results from the fusion of the two slightly different views of the external world that our laterally displaced eyes receive (Schor, 1987; Tyler, 1983).) In fact, users often prefer using 3D stereoscopic displays (Spain and Holzhausen, 1991) and find that such displays provide more fun and excitement than do simpler monoscopic displays (Wichanski, 1991). Thus, in creating 3D virtual environments or 3D simulated displays, much attention recently has been devoted to visual 3D stereoscopic displays. Yet, given the costs and technical requirements of such displays, we should consider several issues. First, we should consider in what conditions and situations these stereoscopic displays enhance perception and performance. Second, we should consider how binocular geometry and various spatial factors can affect human stereoscopic vision and, thus, constrain the design and use of stereoscopic displays. Finally, we should consider the modeling geometry of the software, the display geometry of the hardware, and some technological limitations that constrain the design and use of stereoscopic displays by humans. In the following section we consider when 3D stereoscopic displays are useful and why they are useful in some conditions but not others. 
In the section after that we review some basic concepts about human stereopsis and fusion that are of interest to those who design or use 3D stereoscopic displays. Also in that section we point out some spatial factors that limit stereopsis and fusion in human vision as well as some potential problems that should be considered in designing and using 3D stereoscopic displays. Following that we discuss some software and hardware issues, such as modelling geometry and display geometry as well as geometric distortions and other artifacts that can affect human perception.
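The binocular geometry that the passage says constrains stereoscopic display design reduces, under a small-angle approximation, to a simple relation between angular disparity, interpupillary distance, and the depths of the fixated and target points. The function below is a hedged sketch of that textbook relation; the default 65 mm interpupillary distance and the example depths are assumptions.

```python
import math

def angular_disparity(depth_fix_m, depth_obj_m, ipd_m=0.065):
    """Approximate angular disparity (radians) of a point at depth_obj_m
    relative to a fixation point at depth_fix_m, for interpupillary
    distance ipd_m. Small-angle approximation:
        eta ~= ipd * (1/d_obj - 1/d_fix)
    Positive values indicate the object is nearer than fixation
    (crossed disparity)."""
    return ipd_m * (1.0 / depth_obj_m - 1.0 / depth_fix_m)

# Hypothetical case: an object 5 cm nearer than a 1 m fixation point.
eta = angular_disparity(1.00, 0.95)
print(math.degrees(eta) * 3600)  # disparity in arcseconds
```

Because disparity falls off with the square of distance, the same depth difference produces far less disparity at larger viewing distances, which is one reason stereoscopic displays help most for near-field spatial tasks.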


2019 ◽  
Vol 25 (Suppl. 1-2) ◽  
pp. 60-71 ◽  
Author(s):  
Nikolaus E. Wolter ◽  
Karen A. Gordon ◽  
Jennifer L. Campos ◽  
Luis D. Vilchez Madrigal ◽  
David D. Pothier ◽  
...  

Introduction: To determine the impact of a head-referenced cochlear implant (CI) stimulation system, BalanCI, on balance and postural control in children with bilateral cochleovestibular loss (BCVL) who use bilateral CI. Methods: Prospective, blinded case-control study. Balance and postural control testing occurred in two settings: (1) quiet clinical setting and (2) immersive realistic virtual environment (Challenging Environment Assessment Laboratory [CEAL], Toronto Rehabilitation Institute). Postural control was assessed in 16 and balance in 10 children with BCVL who use bilateral CI, along with 10 typically developing children. Children with neuromotor, cognitive, or visual deficits that would prevent them from performing the tests were excluded. Children wore the BalanCI, which is a head-mounted device that couples with their CIs through the audio port and provides head-referenced spatial information delivered via the intracochlear electrode array. Postural control was measured by center of pressure (COP) and time to fall using the Wii™ (Nintendo, WA, USA) Balance Board for feet and the BalanCI for head, during the administration of the Modified Clinical Test of Sensory Interaction in Balance (CTSIB-M). The COP of the head and feet were assessed for change by deviation, measured as root mean square around the COP (COP-RMS), rate of deviation (COP-RMS/duration), and rate of path length change from center (COP-velocity). Balance was assessed by the Bruininks-Oseretsky Test of Motor Proficiency 2, balance subtest (BOT-2), specifically, BOT-2 score as well as time to fall/fault. Results: In the virtual environment, children demonstrated more stable balance when using BalanCI as measured by an improvement in BOT-2 scores. In a quiet clinical setting, the use of BalanCI led to improved postural control as demonstrated by significant reductions in COP-RMS and COP-velocity.
With the use of BalanCI, the number of falls/faults was significantly reduced and time to fall increased. Conclusions: BalanCI is a simple and effective means of improving postural control and balance in children with BCVL who use bilateral CI. BalanCI could potentially improve the safety of these children, reduce the effort they expend maintaining balance and allow them to take part in more complex balance tasks where sensory information may be limited and/or noisy.
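The three sway measures named in the methods (COP-RMS, COP-RMS/duration, COP-velocity) can be computed from a sampled centre-of-pressure trace. The sketch below follows the abstract's verbal definitions; the exact formulas, the function name, and the toy trace are assumptions rather than the study's published analysis code.

```python
import numpy as np

def cop_metrics(cop_xy, duration_s):
    """cop_xy: (n, 2) array of centre-of-pressure samples (x, y), e.g. in cm.
    Returns, per the abstract's verbal definitions (exact formulas assumed):
      - COP-RMS: root-mean-square deviation around the mean COP,
      - COP-RMS/duration: rate of deviation,
      - COP-velocity: total path length divided by duration."""
    centred = cop_xy - cop_xy.mean(axis=0)
    rms = np.sqrt((centred ** 2).sum(axis=1).mean())
    path = np.linalg.norm(np.diff(cop_xy, axis=0), axis=1).sum()
    return rms, rms / duration_s, path / duration_s

# Hypothetical 4-sample sway trace over 2 seconds.
cop = np.array([[0.0, 0.0], [0.3, 0.1], [0.1, -0.2], [-0.2, 0.1]])
rms, rms_rate, velocity = cop_metrics(cop, duration_s=2.0)
print(rms, rms_rate, velocity)
```

Under this reading, the reported reductions in COP-RMS and COP-velocity with BalanCI correspond to a trace that both stays closer to its mean position and wanders less per unit time.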


2018 ◽  
Vol 30 (11) ◽  
pp. 1657-1682 ◽  
Author(s):  
Rachel M. Brown ◽  
Virginia B. Penhune

Humans must learn a variety of sensorimotor skills, yet the relative contributions of sensory and motor information to skill acquisition remain unclear. Here we compare the behavioral and neural contributions of perceptual learning to that of motor learning, and we test whether these contributions depend on the expertise of the learner. Pianists and nonmusicians learned to perform novel melodies on a piano during fMRI scanning in four learning conditions: listening (auditory learning), performing without auditory feedback (motor learning), performing with auditory feedback (auditory–motor learning), or observing visual cues without performing or listening (cue-only learning). Visual cues were present in every learning condition and consisted of musical notation for pianists and spatial cues for nonmusicians. Melodies were performed from memory with no visual cues and with auditory feedback (recall) five times during learning. Pianists showed greater improvements in pitch and rhythm accuracy at recall during auditory learning compared with motor learning. Nonmusicians demonstrated greater rhythm improvements at recall during auditory learning compared with all other learning conditions. Pianists showed greater primary motor response at recall during auditory learning compared with motor learning, and response in this region during auditory learning correlated with pitch accuracy at recall and with auditory–premotor network response during auditory learning. Nonmusicians showed greater inferior parietal response during auditory compared with auditory–motor learning, and response in this region correlated with pitch accuracy at recall. Results suggest an advantage for perceptual learning compared with motor learning that is both general and expertise-dependent. This advantage is hypothesized to depend on feedforward motor control systems that can be used during learning to transform sensory information into motor production.

