Roles of Gravitational Cues and Efference Copy Signals in the Rotational Updating of Memory Saccades

2005 ◽  
Vol 94 (1) ◽  
pp. 468-478 ◽  
Author(s):  
Eliana M. Klier ◽  
Dora E. Angelaki ◽  
Bernhard J. M. Hess

Primates are able to localize a briefly flashed target despite intervening movements of the eyes, head, or body. This ability, often referred to as updating, requires extraretinal signals related to the intervening movement. With active roll rotations of the head from an upright position, it has been shown that the updating mechanism is 3-dimensional, robust, and geometrically sophisticated. Here we examine whether such a rotational updating mechanism operates during passive motion both with and without inertial cues about head/body position in space. Subjects were rotated from either an upright or supine position, about a nasal–occipital axis, briefly shown a world-fixed target, rotated back to their original position, and then asked to saccade to the remembered target location. Using this paradigm, we tested subjects' ability to update from various tilt angles (0, ±30, ±45, and ±90°) for 8 target directions and 2 target eccentricities. In the upright condition, subjects accurately updated the remembered locations from all tilt angles independent of target direction or eccentricity. Slopes of directional errors versus tilt angle ranged from −0.011 to 0.15, and were significantly different from a slope of 1 (no compensation for head-in-space roll) and a slope of 0.9 (no compensation for eye-in-space roll). Because the eyes, head, and body were fixed throughout these passive movements, subjects could not use efference copies or neck proprioceptive cues to assess the amount of tilt, suggesting that vestibular signals and/or body proprioceptive cues suffice for updating. In the supine condition, where gravitational signals could not contribute, slopes ranged from 0.60 to 0.82, indicating poor updating performance. Thus information specifying the body's orientation relative to gravity is critical for maintaining spatial constancy and for distinguishing body-fixed versus world-fixed reference frames.
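The slope analysis above can be sketched in a few lines. The data below are hypothetical, not the study's: directional saccade error is regressed on tilt angle, where a slope near 0 indicates full compensation for the intervening rotation and a slope near 1 indicates none.

```python
import numpy as np

def error_slope(tilt_deg, error_deg):
    """Least-squares slope of directional error versus tilt angle."""
    slope, _intercept = np.polyfit(tilt_deg, error_deg, 1)
    return slope

tilts = np.array([-90.0, -45.0, -30.0, 0.0, 30.0, 45.0, 90.0])
good_updater = 0.05 * tilts    # errors barely grow with tilt (upright-like)
poor_updater = 0.70 * tilts    # errors track most of the tilt (supine-like)
print(round(error_slope(tilts, good_updater), 2))  # 0.05
print(round(error_slope(tilts, poor_updater), 2))  # 0.7
```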

1997 ◽  
Vol 352 (1360) ◽  
pp. 1515-1524 ◽  
Author(s):  
J. Bures ◽  
A. A. Fenton ◽  
Yu. Kaminsky ◽  
J. Rossier ◽  
B. Sacchetti ◽  
...  

Navigation by means of cognitive maps appears to require the hippocampus; hippocampal place cells (PCs) appear to store spatial memories because their discharge is confined to cell-specific places called firing fields (FFs). Experiments with rats manipulated idiothetic and landmark-related information to understand the relationship between PC activity and spatial cognition. Rotating a circular arena in the light caused a discrepancy between these cues, and most FFs disappeared in both the arena and room reference frames. However, FFs persisted in the rotating arena frame when the discrepancy was reduced by darkness or by a card in the arena. The discrepancy was increased by 'field clamping': holding the rat in a room-defined FF location with rotations that countered its locomotion. Most FFs dissipated and reappeared an hour or more after the clamp. Place-avoidance experiments showed that navigation uses independent idiothetic and exteroceptive memories. Rats learned to avoid an unmarked footshock region within a circular arena. When acquired on the stable arena in the light, the location of the punishment was learned using both room and idiothetic cues; extinction in the dark transferred to the following session in the light. If, however, extinction occurred during rotation, only the arena-frame avoidance was extinguished in darkness; the room-defined location was again avoided when the lights were turned back on. Idiothetic memory of room-defined avoidance was not formed during rotation in the light: regardless of rotation, there was no avoidance when the lights were turned off, but room-frame avoidance reappeared when they were turned back on. The place-preference task rewarded visits to an allocentric target location with a randomly dispersed pellet. The resulting behaviour alternated between random pellet searching and target-directed navigation, making it possible to examine PC correlates of these two classes of spatial behaviour.
The independence of idiothetic and exteroceptive spatial memories and the disruption of PC firing during rotation suggest that PCs may not be necessary for spatial cognition; this idea can be tested by recordings during the place-avoidance and place-preference tasks.


2008 ◽  
Vol 16 (4) ◽  
pp. 42-47 ◽  
Author(s):  
Brian P. Gorman ◽  
David Diercks ◽  
Norman Salmon ◽  
Eric Stach ◽  
Gonzalo Amador ◽  
...  

Atom probe tomography has primarily been used for atomic-scale characterization of highly electrically conductive materials. A high electric field applied to a needle-shaped specimen evaporates surface atoms, and a time-of-flight measurement determines each atom's identity. A 2-dimensional detector determines each atom's original position on the specimen. When this process is repeated over many surface monolayers, the original specimen can be reconstructed as a 3-dimensional representation. For the 3-D reconstruction to be accurate, the field required for atomic evaporation must be known a priori. For many metallic materials this evaporation field is well characterized, and 3-D reconstructions can be achieved with reasonable accuracy.
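The time-of-flight identification described above rests on a standard energy-conservation formula (generic physics, not instrument-specific): an ion of mass m and charge state n accelerated through voltage V that reaches a detector at distance L after time t satisfies m/n = 2·e·V·(t/L)². A minimal sketch:

```python
import math

E_CHARGE = 1.602176634e-19    # elementary charge, C
DALTON = 1.66053906660e-27    # unified atomic mass unit, kg

def mass_to_charge_da(voltage_v, flight_time_s, flight_length_m):
    """Mass-to-charge-state ratio in Da per elementary charge,
    from m/n = 2*e*V*(t/L)**2."""
    m_over_n = 2 * E_CHARGE * voltage_v * (flight_time_s / flight_length_m) ** 2
    return m_over_n / DALTON

# Example: an Al+ ion (27 Da) accelerated through 5 kV over a 0.10 m flight path.
t_al = 0.10 * math.sqrt(27 * DALTON / (2 * E_CHARGE * 5000.0))
print(round(mass_to_charge_da(5000.0, t_al, 0.10), 3))  # 27.0
```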


2004 ◽  
Vol 91 (4) ◽  
pp. 1608-1619 ◽  
Author(s):  
Robert L. White ◽  
Lawrence H. Snyder

Neurons in many cortical areas involved in visuospatial processing represent remembered spatial information in retinotopic coordinates. During a gaze shift, the retinotopic representation of a target location that is fixed in the world (world-fixed reference frame) must be updated, whereas the representation of a target fixed relative to the center of gaze (gaze-fixed) must remain constant. To investigate how such computations might be performed, we trained a 3-layer recurrent neural network to store and update a spatial location based on a gaze perturbation signal, and to do so flexibly based on a contextual cue. The network produced an accurate readout of target position when cued to either reference frame, but was less precise when updating was performed. This output mimics the pattern of behavior seen in animals performing a similar task. We tested whether updating would preferentially use gaze position or gaze velocity signals, and found that the network strongly preferred velocity for updating world-fixed targets. Furthermore, we found that gaze position gain fields were not present when velocity signals were available for updating. These results have implications for how updating is performed in the brain.
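The computation the network must perform can be caricatured in a few lines. This is a toy sketch, not the trained recurrent network from the study: the retinal coordinate of a world-fixed target is counter-rotated by integrating a gaze-velocity signal, a gaze-fixed target keeps a constant retinal coordinate, and the "contextual cue" is reduced to a flag.

```python
def update_target(retinal_pos_deg, gaze_velocity_deg_s, dt_s, world_fixed):
    """Velocity-based updating of a remembered 1-D target location."""
    pos = retinal_pos_deg
    for v in gaze_velocity_deg_s:
        if world_fixed:
            pos -= v * dt_s    # integrate velocity; shift opposite to gaze
    return pos

# A 10-deg rightward gaze shift (1 deg/s for 100 steps of 0.1 s) moves a
# world-fixed target 10 deg leftward on the retina; a gaze-fixed target
# is unaffected.
vel = [1.0] * 100
print(round(update_target(5.0, vel, 0.1, world_fixed=True), 6))   # -5.0
print(round(update_target(5.0, vel, 0.1, world_fixed=False), 6))  # 5.0
```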


2015 ◽  
Vol 114 (6) ◽  
pp. 3211-3219 ◽  
Author(s):  
J. J. Tramper ◽  
W. P. Medendorp

It is known that the brain uses multiple reference frames to code spatial information, including eye-centered and body-centered frames. When we move our body in space, these internal representations are no longer in register with external space, unless they are actively updated. Whether the brain updates multiple spatial representations in parallel, or whether it restricts its updating mechanisms to a single reference frame from which other representations are constructed, remains an open question. We developed an optimal integration model to simulate the updating of visual space across body motion in multiple or single reference frames. To test this model, we designed an experiment in which participants had to remember the location of a briefly presented target while being translated sideways. The behavioral responses were in agreement with a model that uses a combination of eye- and body-centered representations, weighted according to the reliability in which the target location is stored and updated in each reference frame. Our findings suggest that the brain simultaneously updates multiple spatial representations across body motion. Because both representations are kept in sync, they can be optimally combined to provide a more precise estimate of visual locations in space than based on single-frame updating mechanisms.
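The reliability weighting in the model above is standard inverse-variance (maximum-likelihood) integration; the sketch below uses generic notation, not the paper's, with one eye-centered and one body-centered estimate of the same location.

```python
def optimal_combine(x_eye, var_eye, x_body, var_body):
    """Inverse-variance-weighted combination of two noisy estimates.
    The combined variance is always <= the smaller input variance."""
    w_eye = (1 / var_eye) / (1 / var_eye + 1 / var_body)
    x_hat = w_eye * x_eye + (1 - w_eye) * x_body
    var_hat = 1 / (1 / var_eye + 1 / var_body)
    return x_hat, var_hat

# Equally reliable estimates are averaged, and precision improves:
x, v = optimal_combine(10.0, 4.0, 14.0, 4.0)
print(x, v)  # 12.0 2.0
```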


Author(s):  
Lena Mary Houlihan ◽  
David Naughton ◽  
Mark C. Preul

Surgical freedom is the most important metric at the disposal of the surgeon. The volume of surgical freedom (VSF) is a new methodology that produces an optimal qualitative and quantitative representation of an access corridor and provides the surgeon with an anatomical, spatially accurate, and clinically applicable metric. In this study, illustrative dissection examples were completed using two of the most common surgical approaches, the pterional craniotomy and the supraorbital craniotomy. The VSF methodology models the surgical corridor as a cone with an irregular base. The measurement data are fitted to the cone model, and from these fitted data, the volume of the cone is calculated as a volumetric measurement of the surgical corridor. A normalized VSF compensates for inaccurate measurements that may occur as a result of dependence on probe length during data acquisition and provides a fixed reference metric that is applicable across studies. The VSF compensates for multiple inaccuracies in the practical and mathematical methods currently used for quantitative assessment, thereby enabling the production of 3-dimensional models of the surgical corridor. The VSF is therefore an improved standard for assessment of surgical freedom.
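The cone-with-irregular-base model can be illustrated as follows. The details here are assumptions for illustration, not the published VSF method: the irregular base is approximated by a polygon traced at the craniotomy (area via the shoelace formula), with the cone apex at the deep anatomical target.

```python
def polygon_area(points):
    """Shoelace formula for a simple 2-D polygon given as (x, y) vertices."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2

def cone_volume(base_points, height):
    """Volume of a cone with an irregular (polygonal) base: V = A * h / 3."""
    return polygon_area(base_points) * height / 3

# Hypothetical 4 x 4 mm access window with the target 30 mm deep:
square = [(0, 0), (4, 0), (4, 4), (0, 4)]   # 16 mm^2 base
print(cone_volume(square, 30.0))            # 160.0 mm^3
```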


2021 ◽  
Vol 12 ◽  
Author(s):  
Lei Zheng ◽  
Jan-Gabriel Dobroschke ◽  
Stefan Pollmann

We investigated if contextual cueing can be guided by egocentric and allocentric reference frames. Combinations of search configurations and external frame orientations were learned during a training phase. In Experiment 1, either the frame orientation or the configuration was rotated, thereby disrupting either the allocentric or egocentric and allocentric predictions of the target location. Contextual cueing survived both of these manipulations, suggesting that it can overcome interference from both reference frames. In contrast, when changed orientations of the external frame became valid predictors of the target location in Experiment 2, we observed contextual cueing as long as one reference frame was predictive of the target location, but contextual cueing was eliminated when both reference frames were invalid. Thus, search guidance in repeated contexts can be supported by both egocentric and allocentric reference frames as long as they contain valid information about the search goal.


Author(s):  
Steven Charles

In order to analyze the kinematics or model the dynamics of human motion, one must be able to abstract from the intricate anatomy of the body the mechanical linkages and kinematic constraints that best approximate the joints of the body. Given the number and complexity of joints in the human body, this abstraction can be a challenging task, especially for students. While rotations about a single degree of freedom (DOF) are easy to grasp, rotations about multiple DOF, which occur commonly throughout the body (e.g., at the shoulder, wrist, and ankle), are anything but trivial. Likewise, the kinematics and dynamics of mechanical linkages such as the upper or lower limb quickly become unwieldy. To deal with these challenges, students learn to use tools from mechanics and robotics (body- and space-fixed reference frames, transformations, generalized coordinates, etc.), but these concepts can themselves be challenging and certainly take time to learn.
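The frame transformations mentioned above can be made concrete with the simplest possible limb model, a two-link planar linkage (e.g., upper arm plus forearm; link lengths and angles here are illustrative). Composing the joint rotations carries the fingertip from body-fixed to space-fixed coordinates.

```python
import math

def fingertip(l1, l2, shoulder_deg, elbow_deg):
    """Forward kinematics of a 2-DOF planar linkage.
    Angles in degrees; the elbow angle is measured in the upper-arm
    (body-fixed) frame, so the forearm's space-fixed angle is q1 + q2."""
    q1 = math.radians(shoulder_deg)
    q2 = math.radians(elbow_deg)
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y

# Upper arm (0.3 m) raised vertically, forearm (0.25 m) bent 90 deg forward:
x, y = fingertip(0.3, 0.25, 90, -90)
print(round(x, 3), round(y, 3))  # 0.25 0.3
```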


2007 ◽  
Vol 98 (1) ◽  
pp. 537-541 ◽  
Author(s):  
Eliana M. Klier ◽  
Dora E. Angelaki ◽  
Bernhard J. M. Hess

As we move our bodies in space, we often undergo head and body rotations about different axes—yaw, pitch, and roll. The order in which we rotate about these axes is an important factor in determining the final position of our bodies in space because rotations, unlike translations, do not commute. Does our brain keep track of the noncommutativity of rotations when computing changes in head and body orientation and then use this information when planning subsequent motor commands? We used a visuospatial updating task to investigate whether saccades to remembered visual targets are accurate after intervening, whole-body rotational sequences. The sequences were reversed, either yaw then roll or roll then yaw, such that the final required eye movements to reach the same space-fixed target were different in each case. While each subject performed consistently irrespective of target location and rotational combination, we found great intersubject variability in their capacity to update. The distance between the noncommutative endpoints was, on average, half of that predicted by perfect noncommutativity. Nevertheless, most subjects did make eye movements to distinct final endpoint locations and not to one unique location in space as predicted by a commutative model. In addition, their noncommutative performance significantly improved when their less than ideal updating performance was taken into account. Thus the brain can produce movements that are consistent with the processing of noncommutative rotations, although it is often poor in using internal estimates of rotation for updating.
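The noncommutativity at issue is a basic property of rotation matrices, and the commutative-model prediction (one unique endpoint) versus the noncommutative prediction (two distinct endpoints) can be checked directly. This is an illustrative sketch, not the study's analysis code; axis conventions are assumed.

```python
import numpy as np

def yaw(deg):
    """Rotation about the vertical (z) axis."""
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

def roll(deg):
    """Rotation about the naso-occipital (x) axis."""
    a = np.radians(deg)
    return np.array([[1.0, 0.0,       0.0],
                     [0.0, np.cos(a), -np.sin(a)],
                     [0.0, np.sin(a),  np.cos(a)]])

target = np.array([1.0, 0.0, 0.0])            # a space-fixed target direction
yaw_then_roll = roll(90) @ yaw(90) @ target   # rightmost rotation applies first
roll_then_yaw = yaw(90) @ roll(90) @ target
print(np.allclose(yaw_then_roll, roll_then_yaw))  # False
```

The same two 90° rotations applied in opposite orders send the target to two different final directions, which is why the required eye movements differ between the reversed sequences.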

