Eye position signals in the dorsal pulvinar during fixation and goal-directed saccades

2019 ◽  
Author(s):  
Lukas Schneider ◽  
Adan-Ulises Dominguez-Vargas ◽  
Lydia Gibson ◽  
Igor Kagan ◽  
Melanie Wilke

Abstract: Most sensorimotor cortical areas contain eye position information thought to ensure perceptual stability across saccades and underlie spatial transformations supporting goal-directed actions. One pathway by which eye position signals could be relayed to and across cortical areas is via the dorsal pulvinar. Several studies have demonstrated saccade-related activity in the dorsal pulvinar, and we have recently shown that many neurons exhibit post-saccadic spatial preference long after saccade execution. In addition, dorsal pulvinar lesions lead to gaze-holding deficits expressed as nystagmus or ipsilesional gaze bias, prompting us to investigate the effects of eye position. We tested three starting eye positions (−15°/0°/15°) in monkeys performing a visually-cued memory saccade task. We found two main types of gaze dependence. First, ∼50% of neurons showed an effect of static gaze direction during initial and post-saccadic fixation. Eccentric gaze preference was more common than straight ahead. Some of these neurons were not visually responsive and might be primarily signaling the position of the eyes in the orbit, or coding foveal targets in a head/body/world-centered reference frame. Second, many neurons showed a combination of eye-centered and gaze-dependent modulation of visual, memory, and saccadic responses to a peripheral target. A small subset showed effects consistent with eye position-dependent gain modulation. Analysis of reference frames across task epochs from visual cue to post-saccadic target fixation indicated a transition from predominantly eye-centered encoding to representation of final gaze or foveated locations in non-retinocentric coordinates. 
These results show that dorsal pulvinar neurons carry information about eye position, which could contribute to steady gaze during postural changes and to reference frame transformations for visually-guided eye and limb movements.

New & Noteworthy: Work on the pulvinar has focused on eye-centered visuospatial representations, but the position of the eyes in the orbit is also an important factor that needs to be taken into account during spatial orienting and goal-directed reaching. Here we show that dorsal pulvinar neurons are influenced by eye position. Gaze direction modulated ongoing firing during stable fixation, as well as visual and saccade responses to peripheral targets, suggesting involvement of the dorsal pulvinar in spatial coordinate transformations.

2020 ◽  
Vol 123 (1) ◽  
pp. 367-391 ◽  
Author(s):  
Lukas Schneider ◽  
Adan-Ulises Dominguez-Vargas ◽  
Lydia Gibson ◽  
Igor Kagan ◽  
Melanie Wilke

Sensorimotor cortical areas contain eye position information thought to ensure perceptual stability across saccades and underlie spatial transformations supporting goal-directed actions. One pathway by which eye position signals could be relayed to and across cortical areas is via the dorsal pulvinar. Several studies have demonstrated saccade-related activity in the dorsal pulvinar, and we have recently shown that many neurons exhibit postsaccadic spatial preference. In addition, dorsal pulvinar lesions lead to gaze-holding deficits expressed as nystagmus or ipsilesional gaze bias, prompting us to investigate the effects of eye position. We tested three starting eye positions (−15°, 0°, 15°) in monkeys performing a visually cued memory saccade task. We found two main types of gaze dependence. First, ~50% of neurons showed dependence on static gaze direction during initial and postsaccadic fixation, and might be signaling the position of the eyes in the orbit or coding foveal targets in a head/body/world-centered reference frame. The population-derived eye position signal lagged behind the saccade. Second, many neurons showed a combination of eye-centered and gaze-dependent modulation of visual, memory, and saccadic responses to a peripheral target. A small subset showed effects consistent with eye position-dependent gain modulation. Analysis of reference frames across task epochs from visual cue to postsaccadic fixation indicated a transition from predominantly eye-centered encoding to representation of final gaze or foveated locations in nonretinocentric coordinates. These results show that dorsal pulvinar neurons carry information about eye position, which could contribute to steady gaze during postural changes and to reference frame transformations for visually guided eye and limb movements. 
NEW & NOTEWORTHY Work on the pulvinar has focused on eye-centered visuospatial representations, but the position of the eyes in the orbit is also an important factor that needs to be taken into account during spatial orienting and goal-directed reaching. We show that dorsal pulvinar neurons are influenced by eye position. Gaze direction modulated ongoing firing during stable fixation, as well as visual and saccade responses to peripheral targets, suggesting involvement of the dorsal pulvinar in spatial coordinate transformations.
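The two signal types the abstract distinguishes, a pure orbital eye-position signal and eye-position gain modulation of an eye-centered response, can be sketched numerically. This is an illustrative toy model, not the authors' analysis; all parameters (preferred direction, tuning width, gain slope, baseline rate) are invented:

```python
import math

def head_centered(target_eye_deg, eye_pos_deg):
    """Convert an eye-centered (retinal) target location to head-centered
    coordinates by adding the current eye-in-orbit position."""
    return target_eye_deg + eye_pos_deg

def gain_field_rate(target_eye_deg, eye_pos_deg,
                    pref_deg=10.0, sigma=15.0, gain_slope=0.02, base=20.0):
    """Toy gain-field neuron: eye-centered Gaussian tuning whose amplitude
    is scaled (not shifted) by eye position -- the classic signature of
    eye position-dependent gain modulation."""
    tuning = math.exp(-(target_eye_deg - pref_deg) ** 2 / (2 * sigma ** 2))
    gain = 1.0 + gain_slope * eye_pos_deg  # planar eye-position gain
    return base * gain * tuning

# The same retinal target seen from the three tested eye positions:
rates = [gain_field_rate(10.0, ep) for ep in (-15.0, 0.0, 15.0)]
```

With a positive gain slope, an identical retinal stimulus evokes progressively stronger responses at more rightward eye positions, while the preferred retinal location itself does not move.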


1998 ◽  
Vol 80 (5) ◽  
pp. 2274-2294 ◽  
Author(s):  
Eliana M. Klier ◽  
J. Douglas Crawford

Klier, Eliana M. and J. Douglas Crawford. Human oculomotor system accounts for 3-D eye orientation in the visual-motor transformation for saccades. J. Neurophysiol. 80: 2274–2294, 1998. A recent theoretical investigation has demonstrated that three-dimensional (3-D) eye position dependencies in the geometry of retinal stimulation must be accounted for neurally (i.e., in a visuomotor reference frame transformation) if saccades are to be both accurate and obey Listing's law from all initial eye positions. Our goal was to determine whether the human saccade generator correctly implements this eye-to-head reference frame transformation (RFT), or if it approximates this function with a visuomotor look-up table (LT). Six head-fixed subjects participated in three experiments in complete darkness. We recorded 60° horizontal saccades between five parallel pairs of lights, over a vertical range of ±40° (experiment 1), and 30° radial saccades from a central target, with the head upright or tilted 45° clockwise/counterclockwise to induce torsional ocular counterroll, under both binocular and monocular viewing conditions (experiments 2 and 3). 3-D eye orientation and oculocentric target direction (i.e., retinal error) were computed from search coil signals in the right eye. Experiment 1: as predicted, retinal error was a nontrivial function of both target displacement in space and 3-D eye orientation (e.g., horizontally displaced targets could induce horizontal or oblique retinal errors, depending on eye position). These data were input to a 3-D visuomotor LT model, which implemented Listing's law, but predicted position-dependent errors in final gaze direction of up to 19.8°. 
Actual saccades obeyed Listing's law but did not show the predicted pattern of inaccuracies in final gaze direction, i.e., the slope of actual error, as a function of predicted error, was only −0.01 ± 0.14 (compared with 0 for RFT model and 1.0 for LT model), suggesting near-perfect compensation for eye position. Experiments 2 and 3: actual directional errors from initial torsional eye positions were only a fraction of those predicted by the LT model (e.g., 32% for clockwise and 33% for counterclockwise counterroll during binocular viewing). Furthermore, any residual errors were immediately reduced when visual feedback was provided during saccades. Thus, other than sporadic miscalibrations for torsion, saccades were accurate from all 3-D eye positions. We conclude that 1) the hypothesis of a visuomotor look-up table for saccades fails to account even for saccades made directly toward visual targets, but rather, 2) the oculomotor system takes 3-D eye orientation into account in a visuomotor reference frame transformation. This transformation is probably implemented physiologically between retinotopically organized saccade centers (in cortex and superior colliculus) and the brain stem burst generator.
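The contrast between the look-up table (LT) and reference frame transformation (RFT) models can be sketched in a simplified 2-D version, considering only ocular torsion. The geometry and numbers below are illustrative, not the paper's full 3-D quaternion analysis:

```python
import math

def retinal_error(target_dir_deg, torsion_deg, magnitude=30.0):
    """2-D toy: a space-fixed target displacement, viewed through an eye
    counterrolled by `torsion_deg`, appears rotated on the retina
    (sign convention: counterclockwise positive)."""
    a = math.radians(target_dir_deg - torsion_deg)
    return magnitude * math.cos(a), magnitude * math.sin(a)

def lookup_table_saccade(retinal_err):
    """LT model: map retinal error directly to a gaze displacement,
    ignoring current eye orientation."""
    return retinal_err

def rft_saccade(retinal_err, torsion_deg):
    """RFT model: rotate the retinal error by the current eye torsion
    before generating the movement, recovering the space-fixed goal."""
    x, y = retinal_err
    t = math.radians(torsion_deg)
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t))
```

For a purely horizontal 30° target displacement seen with 45° of counterroll, the LT model commands an oblique, inaccurate saccade, whereas the RFT model recovers the correct horizontal displacement, mirroring the position-dependent errors the LT model predicts and the data rule out.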


2021 ◽  
Vol 14 ◽  
Author(s):  
Charlotte Doussot ◽  
Olivier J. N. Bertrand ◽  
Martin Egelhaaf

Bumblebees perform complex flight maneuvers around the barely visible entrance of their nest upon their first departures. During these flights, bees learn visual information about the surroundings, possibly including its spatial layout. They rely on this information to return home. Depth information can be derived from the apparent motion of the scenery on the bees' retina. This motion is shaped by the animal's flight and orientation: bees employ a saccadic flight and gaze strategy, where rapid turns of the head (saccades) alternate with flight segments of apparently constant gaze direction (intersaccades). When the gaze direction is kept relatively constant during intersaccades, the apparent motion contains information about the animal's distance to environmental objects, i.e., depth in an egocentric reference frame. Alternatively, when the gaze direction rotates around a fixed point in space, the animal perceives the depth structure relative to this pivot point, i.e., in an allocentric reference frame. If the pivot point is at the nest-hole, the information is nest-centric. Here, we investigate in which reference frames bumblebees perceive depth information during their learning flights. By precisely tracking the head orientation, we found that half of the time, the head appears to pivot actively. However, only a few of the corresponding pivot points are close to the nest entrance. Our results indicate that bumblebees perceive visual information in several reference frames when they learn about the surroundings of a behaviorally relevant location.
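Recovering a pivot point from tracked positions and gaze directions reduces to a least-squares intersection of gaze lines. The sketch below is a generic 2-D version of that computation, not the authors' pipeline:

```python
import math

def pivot_point(positions, gaze_deg):
    """Least-squares intersection of 2-D gaze lines: each sample is a
    position (x, y) and a gaze direction in degrees. Returns the point
    minimizing the summed squared perpendicular distance to all lines,
    via the normal equations sum(I - d d^T) x = sum(I - d d^T) p."""
    A = [[0.0, 0.0], [0.0, 0.0]]
    b = [0.0, 0.0]
    for (px, py), g in zip(positions, gaze_deg):
        dx, dy = math.cos(math.radians(g)), math.sin(math.radians(g))
        # projector onto the line's normal direction: I - d d^T
        m = [[1 - dx * dx, -dx * dy], [-dx * dy, 1 - dy * dy]]
        A[0][0] += m[0][0]; A[0][1] += m[0][1]
        A[1][0] += m[1][0]; A[1][1] += m[1][1]
        b[0] += m[0][0] * px + m[0][1] * py
        b[1] += m[1][0] * px + m[1][1] * py
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # singular if all gaze parallel
    return ((b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (b[1] * A[0][0] - b[0] * A[1][0]) / det)
```

A fitted pivot near the nest entrance would indicate nest-centric sampling; pivots elsewhere indicate other allocentric anchors.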


2008 ◽  
Vol 99 (5) ◽  
pp. 2470-2478 ◽  
Author(s):  
André Kaminiarz ◽  
Bart Krekelberg ◽  
Frank Bremmer

The mechanisms underlying visual perceptual stability are usually investigated using voluntary eye movements. In such studies, errors in perceptual stability during saccades and pursuit are commonly interpreted as mismatches between actual eye position and eye-position signals in the brain. The generality of this interpretation could in principle be tested by investigating spatial localization during reflexive eye movements whose kinematics are very similar to those of voluntary eye movements. Accordingly, in this study, we determined mislocalization of flashed visual targets during optokinetic afternystagmus (OKAN). These eye movements are quite unique in that they occur in complete darkness and are generated by subcortical control mechanisms. We found that during horizontal OKAN slow phases, subjects mislocalize targets away from the fovea in the horizontal direction. This corresponds to a perceived expansion of visual space and is unlike mislocalization found for any other voluntary or reflexive eye movement. Around the OKAN fast phases, we found a bias in the direction of the fast phase prior to its onset and opposite to the fast-phase direction thereafter. Such a biphasic modulation has also been reported in the temporal vicinity of saccades and during optokinetic nystagmus (OKN). A direct comparison, however, showed that the modulation during OKAN was much larger and occurred earlier relative to fast-phase onset than during OKN. A simple mismatch between the current eye position and the eye-position signal in the brain is unlikely to explain such disparate results across similar eye movements. Instead, these data support the view that mislocalization arises from errors in eye-centered position information.
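The mismatch account that the study tests, in which localization error equals the difference between an internal eye-position signal and the true eye position, can be sketched with an idealized instantaneous fast phase and an anticipatory but sluggish internal signal. The time constants here are invented for illustration:

```python
def actual_eye(t_ms, amp=10.0):
    """Idealized fast phase: an instantaneous step of `amp` deg at t = 0 ms."""
    return amp if t_ms >= 0 else 0.0

def internal_eye(t_ms, amp=10.0, lead_ms=50.0, dur_ms=150.0):
    """Internal eye-position signal: a ramp that starts `lead_ms` before
    the fast phase and finishes `dur_ms` after its own onset."""
    frac = (t_ms + lead_ms) / dur_ms
    return amp * min(max(frac, 0.0), 1.0)

def localization_error(t_ms):
    """Mismatch account: error = internal signal - actual eye position.
    Positive before fast-phase onset (bias in the fast-phase direction),
    negative afterwards (bias opposite), i.e., a biphasic pattern."""
    return internal_eye(t_ms) - actual_eye(t_ms)
```

Such a model reproduces a biphasic modulation around fast-phase onset, but, as the abstract argues, it cannot by itself explain why the modulation differs so strongly in size and timing between OKAN and OKN despite similar eye kinematics.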


2010 ◽  
Vol 22 (12) ◽  
pp. 2836-2849 ◽  
Author(s):  
Klaus Gramann ◽  
Julie Onton ◽  
Davide Riccobon ◽  
Hermann J. Mueller ◽  
Stanislav Bardins ◽  
...  

Maintaining spatial orientation while travelling requires integrating spatial information encountered from an egocentric viewpoint with accumulated information represented within egocentric and/or allocentric reference frames. Here, we report changes in high-density EEG activity during a virtual tunnel passage task in which subjects respond to a postnavigation homing challenge in distinctly different ways—either compatible with a continued experience of the virtual environment from a solely egocentric perspective or as if also maintaining their original entrance orientation, indicating use of a parallel allocentric reference frame. By spatially filtering the EEG data using independent component analysis, we found that these two equally sized subject subgroups exhibited differences in EEG power spectral modulation during tunnel passages in only a few cortical areas. During tunnel turns, stronger alpha blocking occurred only in or near right primary visual cortex of subjects whose homing responses were compatible with continued use of an egocentric reference frame. In contrast, approaching and during tunnel turns, subjects who responded in a way compatible with use of an allocentric reference frame exhibited stronger alpha blocking of occipito-temporal, bilateral inferior parietal, and retrosplenial cortical areas, all areas implicated by hemodynamic imaging and neuropsychological observation in construction and maintenance of an allocentric reference frame. We conclude that in these subjects, stronger activation of retrosplenial and related cortical areas during turns supports a continuous translation of egocentrically experienced visual flow into an allocentric model of their virtual position and movement.
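Alpha blocking of this kind is typically quantified as a band-power decrease during an event epoch relative to baseline. A minimal sketch, using a plain DFT rather than the FFT-plus-ICA pipeline of the study, and with an invented sampling rate:

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Average power of `signal` in the [f_lo, f_hi] Hz band, via a plain
    DFT over the band's frequency bins (fine for short epochs; a real
    analysis would use an FFT library)."""
    n = len(signal)
    total, count = 0.0, 0
    for k in range(1, n // 2 + 1):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(x * math.cos(2 * math.pi * k * i / n)
                     for i, x in enumerate(signal))
            im = sum(x * math.sin(2 * math.pi * k * i / n)
                     for i, x in enumerate(signal))
            total += (re * re + im * im) / (n * n)
            count += 1
    return total / count if count else 0.0

def alpha_blocking_db(baseline_epoch, turn_epoch, fs=250.0):
    """Alpha-band (8-12 Hz) power change during a tunnel turn, in dB
    relative to baseline; negative values indicate alpha blocking
    (event-related desynchronization)."""
    p_turn = band_power(turn_epoch, fs, 8.0, 12.0)
    p_base = band_power(baseline_epoch, fs, 8.0, 12.0)
    return 10.0 * math.log10(p_turn / p_base)
```

In the study this comparison is made per independent component rather than per electrode, which is what allows the blocking to be attributed to distinct cortical source areas.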


2018 ◽  
Author(s):  
Eugene Poh ◽  
Jordan A. Taylor

Abstract: Studies on generalization of learned visuomotor perturbations have generally focused on whether learning is coded in extrinsic or intrinsic reference frames. This dichotomy, however, is challenged by recent findings showing that learning is represented in a mixed reference frame. Overlooked in this framework is how learning is the result of multiple processes, such as explicit re-aiming and implicit motor adaptation. Therefore, the proposed mixed representation may simply reflect the superposition of explicit and implicit generalization functions, each represented in different reference frames. Here, we characterized the individual generalization functions of explicit and implicit learning in relative isolation to determine if their combination could predict the overall generalization function when both processes are in operation. We modified the form of feedback in a visuomotor rotation task to isolate explicit and implicit learning, and tested generalization across different limb postures to dissociate the extrinsic and intrinsic representations. We found that explicit generalization occurred predominantly in an extrinsic reference frame but the amplitude was reduced with postural changes, whereas implicit generalization was phase-shifted according to a mixed reference frame representation and amplitude was maintained. A linear combination of individual explicit and implicit generalization functions accounted for nearly 85% of the variance associated with the generalization function in a typical visuomotor rotation task, where both processes are in operation. This suggests that each form of learning results from a mixed representation with distinct extrinsic and intrinsic contributions, and the combination of these features shapes the generalization pattern observed at novel limb postures.

New and Noteworthy: Generalization following learning in visuomotor adaptation tasks can reflect how the brain represents what it learns. 
In this study, we isolated explicit and implicit forms of learning, and showed that they are derived from a mixed reference frame representation with distinct extrinsic and intrinsic contributions. Furthermore, we showed that the overall generalization pattern at novel workspaces is due to the superposition of independent generalization effects developed by explicit and implicit learning processes.
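The superposition account can be sketched as the sum of two Gaussian generalization functions: an extrinsic explicit component centered on the trained direction and an implicit component phase-shifted partway toward the intrinsic prediction. The amplitudes, widths, and 22.5° shift below are illustrative choices, not the fitted values from the study:

```python
import math

def circ_diff(a, b):
    """Signed angular difference in degrees, wrapped to (-180, 180]."""
    return (a - b + 180.0) % 360.0 - 180.0

def explicit_gen(theta, trained=0.0, amp=1.0, width=30.0):
    """Explicit re-aiming: extrinsic, so the peak stays at the trained
    direction; the abstract reports its amplitude shrinks with posture."""
    return amp * math.exp(-circ_diff(theta, trained) ** 2 / (2 * width ** 2))

def implicit_gen(theta, trained=0.0, shift=22.5, amp=0.4, width=30.0):
    """Implicit adaptation: mixed reference frame, so the peak is
    phase-shifted partway toward the intrinsic prediction at the new posture."""
    return amp * math.exp(-circ_diff(theta, trained + shift) ** 2
                          / (2 * width ** 2))

def total_gen(theta):
    """Superposition account: overall generalization at a probe direction
    is the sum of the explicit and implicit functions."""
    return explicit_gen(theta) + implicit_gen(theta)
```

The summed curve peaks between the two component peaks, which is how a linear combination of two pure-frame components can masquerade as a single "mixed" representation.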


2018 ◽  
Vol 15 (3) ◽  
pp. 229-236 ◽  
Author(s):  
Gennaro Ruggiero ◽  
Alessandro Iavarone ◽  
Tina Iachini

Objective: Deficits in egocentric (subject-to-object) and allocentric (object-to-object) spatial representations, with a mainly allocentric impairment, characterize the first stages of Alzheimer's disease (AD). Methods: To identify early cognitive signs of AD conversion, some studies have focused on amnestic Mild Cognitive Impairment (aMCI), reporting alterations in both reference frames, especially the allocentric ones. However, the spatial environments in which we move require the cooperation of both reference frames. Such cooperation implies that we constantly switch from allocentric to egocentric frames and vice versa. This raises the question of whether alterations of switching abilities might also constitute an early cognitive marker of AD, potentially suitable to detect the conversion from aMCI to dementia. Here, we compared AD and aMCI patients with Normal Controls (NC) on the Ego-Allo-Switching spatial memory task. The task assessed the capacity to use switching (Ego-Allo, Allo-Ego) and non-switching (Ego-Ego, Allo-Allo) verbal judgments about relative distances between memorized stimuli. Results: The novel finding of this study is the clear impairment shown by aMCI and AD patients in switching from allocentric to egocentric reference frames. Interestingly, in aMCI, the allocentric deficit appeared attenuated when the first reference frame was egocentric. Conclusion: This led us to conclude that allocentric deficits are not always clinically detectable in aMCI, since the impairments can be masked when the first reference frame is body-centred. In addition, AD and aMCI patients also showed allocentric deficits in the non-switching condition. These findings suggest that switching alterations emerge from impairments in hippocampal and posteromedial areas and from concurrent dysregulations in the locus coeruleus-noradrenaline system or prefrontal cortex.
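The task's two judgment types can be sketched as distance comparisons in the two reference frames. The coordinates and scoring rule below are hypothetical illustrations, not the task's actual stimuli:

```python
import math

def dist(a, b):
    """Euclidean distance between two 2-D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def egocentric_judgment(subject, a, b):
    """Ego frame: which of two memorized objects is closer to the body?"""
    return "a" if dist(subject, a) < dist(subject, b) else "b"

def allocentric_judgment(anchor, a, b):
    """Allo frame: which of two objects is closer to a reference object,
    independent of the subject's own position?"""
    return "a" if dist(anchor, a) < dist(anchor, b) else "b"
```

A switching trial (e.g., Allo-Ego) would pose an allocentric judgment first and an egocentric one second over the same memorized layout; the non-switching conditions repeat the same judgment type twice.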


Author(s):  
Steven M. Weisberg ◽  
Anjan Chatterjee

Abstract Background Reference frames ground spatial communication by mapping ambiguous language (for example, navigation: “to the left”) to properties of the speaker (using a Relative reference frame: “to my left”) or the world (Absolute reference frame: “to the north”). People's preferences for reference frames vary depending on factors like their culture, the specific task in which they are engaged, and differences among individuals. Although most people are proficient with both reference frames, it is unknown whether preference for reference frames is stable within people or varies based on the specific spatial domain. These alternatives are difficult to adjudicate because navigation is one of the few spatial domains that can be naturally solved using multiple reference frames. That is, while spatial navigation directions can be specified using Absolute or Relative reference frames (“go north” vs “go left”), other spatial domains predominantly use Relative reference frames. Here, we used two domains to test the stability of reference frame preference: one based on navigating a four-way intersection, and the other based on the sport of ultimate frisbee. We recruited 58 ultimate frisbee players to complete an online experiment. We measured reaction time and accuracy while participants solved spatial problems in each domain using verbal prompts containing either Relative or Absolute reference frames. Details of the task in both domains were kept as similar as possible while remaining ecologically plausible so that reference frame preference could emerge. Results We pre-registered a prediction that participants would be faster using their preferred reference frame type and that this advantage would correlate across domains; we did not find such a correlation. Instead, the data reveal that people use distinct reference frames in each domain. 
Conclusion This experiment reveals that spatial reference frame types are not stable and may be differentially suited to specific domains. This finding has broad implications for communicating spatial information by offering an important consideration for how spatial reference frames are used in communication: task constraints may affect reference frame choice as much as individual factors or culture.
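The Relative/Absolute mapping the task manipulates can be sketched as a conversion between turn instructions and compass bearings given the traveler's heading (restricted here to the four cardinal directions for simplicity; this is an illustration of the distinction, not the experiment's stimuli):

```python
# Compass bearings in degrees: 0 = north, 90 = east, 180 = south, 270 = west.
TURN = {"left": -90, "right": 90, "straight": 0, "back": 180}
CARDINAL = {0: "north", 90: "east", 180: "south", 270: "west"}

def relative_to_absolute(heading_deg, turn):
    """Resolve a Relative instruction ('go left') into an Absolute one
    ('go west'), given the traveler's current compass heading."""
    return CARDINAL[(heading_deg + TURN[turn]) % 360]

def absolute_to_relative(heading_deg, cardinal):
    """Inverse mapping: which turn takes a traveler at `heading_deg`
    toward the named cardinal direction?"""
    target = {v: k for k, v in CARDINAL.items()}[cardinal]
    delta = (target - heading_deg) % 360
    return {v % 360: k for k, v in TURN.items()}[delta]
```

The same physical movement thus has two equally valid verbal encodings, which is exactly what lets preference for one frame or the other emerge in the intersection and frisbee tasks.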

