Causal inference for spatial constancy across whole-body motion

2018 ◽  
Author(s):  
Florian Perdreau ◽  
James Cooke ◽  
Mathieu Koppen ◽  
W. Pieter Medendorp

Abstract
The brain can estimate the amplitude and direction of self-motion by integrating multiple sources of sensory information, and use this estimate to update object positions in order to provide us with a stable representation of the world. A strategy to improve the precision of the object position estimate would be to integrate this internal estimate and the sensory feedback about the object position based on their reliabilities. Integrating these cues, however, would only be optimal under the assumption that the object has not moved in the world during the intervening body displacement. Therefore, the brain would have to infer whether the internal estimate and the feedback relate to the same external position (stable object), and integrate and/or segregate these cues based on this inference – a process that can be modeled as Bayesian causal inference. To test this hypothesis, we designed a spatial updating task across passive whole-body translation in complete darkness, in which participants (n = 11), seated on a vestibular sled, had to remember the world-fixed position of a visual target. Immediately after the translation, a second target (feedback) was briefly flashed around the estimated “updated” target location, and participants had to report the initial target location. We found that the participants’ responses were systematically biased toward the position of the second target for relatively small but not for large differences between the “updated” and the second target location. This pattern was better captured by a Bayesian causal inference model than by alternative models that would always either integrate or segregate the internally updated target position and the visual feedback. Our results suggest that the brain implicitly represents the posterior probability that the internally updated estimate and the sensory feedback come from a common cause, and uses this probability to weigh the two sources of information in mediating spatial constancy across whole-body motion.

Author Summary
A change of an object’s position on our retina can be caused by a change of the object’s location in the world or by a movement of the eye and body. Here, we examine how the brain solves this attribution problem for spatial updating by assessing the probability that the internally updated location during body motion and the observed retinal feedback after the motion stem from the same object location in the world. Guided by a Bayesian causal inference model, we demonstrate that participants’ errors in spatial updating depend nonlinearly on the spatial discrepancy between the internally updated location and the reafferent visual feedback about the object’s location in the world. We propose that the brain implicitly represents the probability that the internally updated estimate and the sensory feedback come from a common cause, and uses this probability to weigh the two sources of information in mediating spatial constancy across whole-body motion.

2019 ◽  
Vol 121 (1) ◽  
pp. 269-284 ◽  
Author(s):  
Florian Perdreau ◽  
James R. H. Cooke ◽  
Mathieu Koppen ◽  
W. Pieter Medendorp

The brain uses self-motion information to internally update egocentric representations of locations of remembered world-fixed visual objects. If a discrepancy is observed between this internal update and reafferent visual feedback, this could be either due to an inaccurate update or because the object has moved during the motion. To optimally infer the object’s location it is therefore critical for the brain to estimate the probabilities of these two causal structures and accordingly integrate and/or segregate the internal and sensory estimates. To test this hypothesis, we designed a spatial updating task involving passive whole body translation. Participants, seated on a vestibular sled, had to remember the world-fixed position of a visual target. Immediately after the translation, the reafferent visual feedback was provided by flashing a second target around the estimated “updated” target location, and participants had to report the initial target location. We found that the participants’ responses were systematically biased toward the position of the second target for relatively small but not for large differences between the “updated” and the second target location. This pattern was better captured by a Bayesian causal inference model than by alternative models that would always either integrate or segregate the internally updated target location and the visual feedback. Our results suggest that the brain implicitly represents the posterior probability that the internally updated estimate and the visual feedback come from a common cause and uses this probability to weigh the two sources of information in mediating spatial constancy across whole body motion.

NEW & NOTEWORTHY
When we move, egocentric representations of object locations require internal updating to keep them in register with their true world-fixed locations. How does this mechanism interact with reafferent visual input, given that objects typically do not disappear from view? Here we show that the brain implicitly represents the probability that both types of information derive from the same object and uses this probability to weigh their contribution for achieving spatial constancy across whole body motion.
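The nonlinear bias pattern reported above falls out of a standard causal inference computation via model averaging. A minimal Python sketch follows; the noise widths, workspace extent W, and common-cause prior are illustrative placeholders rather than the study's fitted parameters, and segregation is taken to mean relying on the internal update alone:

```python
import numpy as np

def causal_inference_estimate(x_upd, x_fb, sigma_upd, sigma_fb, p_common=0.5):
    """Model-averaged estimate of the remembered target location.

    x_upd: internally updated target position after the translation.
    x_fb:  position of the flashed feedback target.
    Noise SDs and the common-cause prior are illustrative free parameters.
    """
    var_upd, var_fb = sigma_upd**2, sigma_fb**2

    # Common cause (C = 1): both signals arise from one world-fixed location,
    # so their discrepancy is Gaussian with the summed variance.
    var_sum = var_upd + var_fb
    like_c1 = (np.exp(-(x_upd - x_fb)**2 / (2 * var_sum))
               / np.sqrt(2 * np.pi * var_sum))

    # Independent causes (C = 2): the discrepancy is uninformative; assume a
    # uniform likelihood over a hypothetical workspace of width W (cm).
    W = 40.0
    like_c2 = 1.0 / W

    # Posterior probability of a common cause.
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

    # Conditional estimates: reliability-weighted fusion under C = 1;
    # under C = 2, rely on the internal update alone.
    s_fused = (x_upd / var_upd + x_fb / var_fb) / (1 / var_upd + 1 / var_fb)
    s_segregated = x_upd

    # Model averaging weighs the two estimates by the causal posterior.
    return post_c1 * s_fused + (1 - post_c1) * s_segregated

# Responses are pulled toward the flash for small discrepancies only.
for d in (1.0, 5.0, 20.0):
    print(d, causal_inference_estimate(0.0, d, sigma_upd=3.0, sigma_fb=1.0))
```

With these numbers the estimate moves most of the way toward a 1 cm discrepancy, partway toward 5 cm, and essentially ignores 20 cm, which is the qualitative signature the study reports.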


2015 ◽  
Vol 114 (6) ◽  
pp. 3211-3219 ◽  
Author(s):  
J. J. Tramper ◽  
W. P. Medendorp

It is known that the brain uses multiple reference frames to code spatial information, including eye-centered and body-centered frames. When we move our body in space, these internal representations are no longer in register with external space, unless they are actively updated. Whether the brain updates multiple spatial representations in parallel, or whether it restricts its updating mechanisms to a single reference frame from which other representations are constructed, remains an open question. We developed an optimal integration model to simulate the updating of visual space across body motion in multiple or single reference frames. To test this model, we designed an experiment in which participants had to remember the location of a briefly presented target while being translated sideways. The behavioral responses were in agreement with a model that uses a combination of eye- and body-centered representations, weighted according to the reliability with which the target location is stored and updated in each reference frame. Our findings suggest that the brain simultaneously updates multiple spatial representations across body motion. Because both representations are kept in sync, they can be optimally combined to provide a more precise estimate of visual locations in space than single-frame updating mechanisms would allow.
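The weighting scheme this model proposes can be sketched in a few lines of Python. This simplified version reduces remapping in both frames to a subtraction of the translation (glossing over the eye-centered geometry), and the frame-specific storage and updating noise values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def updated_estimate(target, translation, sigma_store, sigma_update):
    """Noisy remembered location after remapping by the body translation.

    Remapping is reduced to a subtraction of the translation; storage and
    updating noise are frame specific.
    """
    remembered = target + rng.normal(0.0, sigma_store)
    return remembered - translation + rng.normal(0.0, sigma_update)

def optimal_combination(est_a, var_a, est_b, var_b):
    """Reliability-weighted (inverse-variance) fusion of two representations."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
    return w_a * est_a + (1 - w_a) * est_b

# Hypothetical noise structure: the eye-centered store is precise but its
# updating is noisy; the body-centered store is coarse but updates reliably.
noise = {"eye": (0.5, 2.0), "body": (1.5, 0.5)}  # (sigma_store, sigma_update), cm
target, translation = 10.0, 25.0  # cm, one illustrative trial

estimates = {f: updated_estimate(target, translation, *s) for f, s in noise.items()}
variances = {f: s[0]**2 + s[1]**2 for f, s in noise.items()}
combined = optimal_combination(estimates["eye"], variances["eye"],
                               estimates["body"], variances["body"])
print(estimates, combined)
# The fused variance, 1 / (1/var_eye + 1/var_body), is below either alone,
# which is the advantage of updating both frames in parallel.
```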


2017 ◽  
Vol 118 (4) ◽  
pp. 2499-2506 ◽  
Author(s):  
A. Pomante ◽  
L. P. J. Selen ◽  
W. P. Medendorp

The vestibular system provides information for spatial orientation. However, this information is ambiguous: because the otoliths sense the gravitoinertial force, they cannot distinguish gravitational and inertial components. As a consequence, prolonged linear acceleration of the head can be interpreted as tilt, referred to as the somatogravic effect. Previous modeling work suggests that the brain disambiguates the otolith signal according to the rules of Bayesian inference, combining noisy canal cues with the a priori assumption that prolonged linear accelerations are unlikely. Within this modeling framework the noise of the vestibular signals affects the dynamic characteristics of the tilt percept during linear whole-body motion. To test this prediction, we devised a novel paradigm to psychometrically characterize the dynamic visual vertical—as a proxy for the tilt percept—during passive sinusoidal linear motion along the interaural axis (0.33 Hz motion frequency, 1.75 m/s² peak acceleration, 80 cm displacement). While subjects (n = 10) kept fixation on a central body-fixed light, a line was briefly flashed (5 ms) at different phases of the motion, the orientation of which had to be judged relative to gravity. Consistent with the model’s prediction, subjects showed a phase-dependent modulation of the dynamic visual vertical, with a subject-specific phase shift with respect to the imposed acceleration signal. The magnitude of this modulation was smaller than predicted, suggesting a contribution of nonvestibular signals to the dynamic visual vertical. Despite this dampening effect, our findings may point to a link between the noise components in the vestibular system and the characteristics of the dynamic visual vertical.

NEW & NOTEWORTHY
A fundamental question in neuroscience is how the brain processes vestibular signals to infer the orientation of the body and objects in space. We show that, under sinusoidal linear motion, systematic error patterns appear in the disambiguation of linear acceleration and spatial orientation. We discuss the dynamics of these illusory percepts in terms of a dynamic Bayesian model that combines uncertainty in the vestibular signals with priors based on the natural statistics of head motion.
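A static, small-angle caricature of this disambiguation shows how a prior for low accelerations converts sustained acceleration into illusory tilt. The actual model is dynamic and includes canal cues; all prior and noise widths below are illustrative:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def map_tilt(f_lateral, sigma_tilt=0.3, sigma_acc=1.0, sigma_oto=0.5):
    """MAP tilt estimate from the lateral gravito-inertial force.

    Small-angle linear-Gaussian caricature: the otoliths measure
    m = G*theta + a + noise, with zero-mean Gaussian priors on tilt theta
    (rad, SD sigma_tilt) and acceleration a (m/s^2, SD sigma_acc). All
    widths are illustrative, not fitted values.
    """
    # Standard linear-Gaussian posterior mean: the prior that accelerations
    # are small attributes part of a sustained acceleration to tilt, which
    # is the somatogravic effect.
    gain = G * sigma_tilt**2 / (G**2 * sigma_tilt**2 + sigma_acc**2 + sigma_oto**2)
    return gain * f_lateral

# A sustained 1.75 m/s^2 lateral acceleration at zero true tilt is read out
# as roughly 9 degrees of illusory tilt with these illustrative numbers.
print(np.degrees(map_tilt(1.75)))
```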


2004 ◽  
Vol 91 (4) ◽  
pp. 1608-1619 ◽  
Author(s):  
Robert L. White ◽  
Lawrence H. Snyder

Neurons in many cortical areas involved in visuospatial processing represent remembered spatial information in retinotopic coordinates. During a gaze shift, the retinotopic representation of a target location that is fixed in the world (world-fixed reference frame) must be updated, whereas the representation of a target fixed relative to the center of gaze (gaze-fixed) must remain constant. To investigate how such computations might be performed, we trained a 3-layer recurrent neural network to store and update a spatial location based on a gaze perturbation signal, and to do so flexibly based on a contextual cue. The network produced an accurate readout of target position when cued to either reference frame, but was less precise when updating was performed. This output mimics the pattern of behavior seen in animals performing a similar task. We tested whether updating would preferentially use gaze position or gaze velocity signals, and found that the network strongly preferred velocity for updating world-fixed targets. Furthermore, we found that gaze position gain fields were not present when velocity signals were available for updating. These results have implications for how updating is performed in the brain.
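The computation the network is trained to approximate is vector subtraction, gated by the context cue. A minimal sketch in Python (coordinates and magnitudes are arbitrary):

```python
import numpy as np

def update_remembered_target(retinal_target, gaze_shift, context):
    """Context-gated updating of a remembered target location.

    retinal_target: stored location in retinotopic coordinates (deg).
    gaze_shift:     intervening eye displacement in the same coordinates.
    context:        'world-fixed' or 'gaze-fixed' cue, as in the task.
    """
    if context == "world-fixed":
        # A world-fixed target must be remapped: subtract the gaze shift.
        return retinal_target - gaze_shift
    # A gaze-fixed target keeps its retinotopic representation unchanged.
    return retinal_target

target = np.array([5.0, -3.0])   # deg, illustrative
shift = np.array([10.0, 0.0])    # deg, rightward gaze shift
print(update_remembered_target(target, shift, "world-fixed"))  # [-5. -3.]
print(update_remembered_target(target, shift, "gaze-fixed"))   # [ 5. -3.]
```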


2019 ◽  
Vol 121 (6) ◽  
pp. 2392-2400 ◽  
Author(s):  
Romy S. Bakker ◽  
Luc P. J. Selen ◽  
W. Pieter Medendorp

In daily life, we frequently reach toward objects while our body is in motion. We have recently shown that body accelerations influence the decision of which hand to use for the reach, possibly by modulating the body-centered computations of the expected reach costs. However, head orientation relative to the body was not manipulated, and hence it remains unclear whether vestibular signals contribute in their head-based sensory frame or in a transformed body-centered reference frame to these cost calculations. To test this, subjects performed a preferential reaching task to targets at various directions while they were sinusoidally translated along the lateral body axis, with their head either aligned with the body (straight ahead) or rotated 18° to the left. As a measure of hand preference, we determined the target direction that resulted in equiprobable right/left-hand choices. Results show that head orientation affects this balanced target angle when the body is stationary but does not further modulate hand preference when the body is in motion. Furthermore, reaction and movement times were larger for reaches to the balanced target angle, resembling a competitive selection process, and were modulated by head orientation when the body was stationary. During body translation, reaction and movement times depended on the phase of the motion, but this phase-dependent modulation had no interaction with head orientation. We conclude that the brain transforms vestibular signals to body-centered coordinates at the early stage of reach planning, when the decision of hand choice is computed. NEW & NOTEWORTHY The brain takes inertial acceleration into account in computing the anticipated biomechanical costs that guide hand selection during whole body motion. Whereas these costs are defined in a body-centered, muscle-based reference frame, the otoliths detect the inertial acceleration in head-centered coordinates. By systematically manipulating head position relative to the body, we show that the brain transforms otolith signals into body-centered coordinates at an early stage of reach planning, i.e., before the decision of hand choice is computed.


2019 ◽  
Vol 116 (18) ◽  
pp. 9060-9065 ◽  
Author(s):  
Kalpana Dokka ◽  
Hyeshin Park ◽  
Michael Jansen ◽  
Gregory C. DeAngelis ◽  
Dora E. Angelaki

The brain infers our spatial orientation and properties of the world from ambiguous and noisy sensory cues. Judging self-motion (heading) in the presence of independently moving objects poses a challenging inference problem because the image motion of an object could be attributed to movement of the object, self-motion, or some combination of the two. We test whether perception of heading and object motion follows predictions of a normative causal inference framework. In a dual-report task, subjects indicated whether an object appeared stationary or moving in the virtual world, while simultaneously judging their heading. Consistent with causal inference predictions, the proportion of object stationarity reports, as well as the accuracy and precision of heading judgments, depended on the speed of object motion. Critically, biases in perceived heading declined when the object was perceived to be moving in the world. Our findings suggest that the brain interprets object motion and self-motion using a causal inference framework.
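A linearized one-dimensional sketch of this inference problem, with invented noise widths and stationarity prior (not the study's model or parameters), reproduces the qualitative predictions: faster object motion lowers stationarity reports and releases perceived heading from the object cue:

```python
import numpy as np

def norm_pdf(x, var):
    return np.exp(-x**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def heading_report(x_flow, x_obj, sig_flow=2.0, sig_obj=1.0, sig_vobj=8.0,
                   p_stationary=0.6):
    """Causal-inference sketch of joint heading/object-stationarity judgments.

    x_flow: optic-flow/vestibular measurement of lateral self-motion v.
    x_obj:  image motion of the object, generated as v_obj - v.
    All widths and the stationarity prior are invented for illustration.
    """
    vf, vo, vv = sig_flow**2, sig_obj**2, sig_vobj**2

    # Stationary object (C = 1): x_obj = -v + noise, so x_flow + x_obj should
    # be near zero. Moving object (C = 2): world motion v_obj ~ N(0, vv)
    # widens the spread of that sum.
    like_stat = norm_pdf(x_flow + x_obj, vf + vo)
    like_move = norm_pdf(x_flow + x_obj, vf + vo + vv)
    p_stat = like_stat * p_stationary / (
        like_stat * p_stationary + like_move * (1 - p_stationary))

    # Heading estimates under each structure: the object cue enters with a
    # negative sign and, under C = 2, is diluted by the object's own motion.
    v_stat = (x_flow / vf - x_obj / vo) / (1 / vf + 1 / vo)
    v_move = (x_flow / vf - x_obj / (vo + vv)) / (1 / vf + 1 / (vo + vv))

    return p_stat, p_stat * v_stat + (1 - p_stat) * v_move

# True self-motion v = 3; faster object motion lowers stationarity reports
# and shrinks the object's pull on perceived heading (heading bias declines).
for v_obj in (0.0, 2.0, 10.0):
    print(v_obj, heading_report(x_flow=3.0, x_obj=v_obj - 3.0))
```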


2007 ◽  
Vol 19 (9) ◽  
pp. 2353-2386 ◽  
Author(s):  
Carlos R. Cassanello ◽  
Vincent P. Ferrera

Saccadic eye movements remain spatially accurate even when the target becomes invisible and the initial eye position is perturbed. The brain accomplishes this in part by remapping the remembered target location in retinal coordinates. The computation that underlies this visual remapping is approximated by vector subtraction: the original saccade vector is updated by subtracting the vector corresponding to the intervening eye movement. The neural mechanism by which vector subtraction is implemented is not fully understood. Here, we investigate vector subtraction within a framework in which eye position and retinal target position signals interact multiplicatively (gain field). When the eyes move, they induce a spatial modulation of the firing rates across a retinotopic map of neurons. The updated saccade metric can be read from the shift of the peak of the population activity across the map. This model uses a quasi-linear (half-rectified) dependence on the eye position and requires the slope of the eye position input to be negatively proportional to the preferred retinal position of each neuron. We derive this constraint analytically and study its range of validity. We discuss how this mechanism relates to experimental results reported in the frontal eye fields of macaque monkeys.
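The peak-shift readout can be illustrated directly. In the Python sketch below, Gaussian retinotopic tuning is multiplied by a half-rectified linear eye-position gain whose slope is negatively proportional to each neuron's preferred position; with the illustrative parameters chosen here the activity peak shifts by approximately the gaze displacement, and the shift is exact only under the constraints derived in the paper:

```python
import numpy as np

# Retinotopic map: preferred retinal positions across the population (deg).
preferred = np.linspace(-40.0, 40.0, 801)
sigma = 15.0           # tuning width (deg), illustrative
c = 1.0 / sigma**2     # gain slope constant; slope_i = -c * preferred_i

def population_activity(retinal_target, gaze_shift):
    """Gaussian retinotopic tuning times a half-rectified eye-position gain
    whose slope is negatively proportional to the preferred position."""
    tuning = np.exp(-(retinal_target - preferred)**2 / (2 * sigma**2))
    gain = np.maximum(0.0, 1.0 - c * preferred * gaze_shift)
    return tuning * gain

def peak_readout(activity):
    """Updated saccade metric read from the location of the activity peak."""
    return preferred[np.argmax(activity)]

# The gaze shift moves the activity peak across the map by roughly the shift
# amount, approximating the vector subtraction x_new = x_old - gaze_shift.
print(peak_readout(population_activity(5.0, 0.0)))   # ~ 5.0
print(peak_readout(population_activity(5.0, 5.0)))   # ~ 0.0
```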


2007 ◽  
Vol 98 (1) ◽  
pp. 537-541 ◽  
Author(s):  
Eliana M. Klier ◽  
Dora E. Angelaki ◽  
Bernhard J. M. Hess

As we move our bodies in space, we often undergo head and body rotations about different axes—yaw, pitch, and roll. The order in which we rotate about these axes is an important factor in determining the final position of our bodies in space because rotations, unlike translations, do not commute. Does our brain keep track of the noncommutativity of rotations when computing changes in head and body orientation and then use this information when planning subsequent motor commands? We used a visuospatial updating task to investigate whether saccades to remembered visual targets are accurate after intervening, whole-body rotational sequences. The sequences were reversed, either yaw then roll or roll then yaw, such that the final required eye movements to reach the same space-fixed target were different in each case. While each subject performed consistently irrespective of target location and rotational combination, we found great intersubject variability in their capacity to update. The distance between the noncommutative endpoints was, on average, half of that predicted by perfect noncommutativity. Nevertheless, most subjects did make eye movements to distinct final endpoint locations and not to one unique location in space as predicted by a commutative model. In addition, their noncommutative performance significantly improved when their less than ideal updating performance was taken into account. Thus the brain can produce movements that are consistent with the processing of noncommutative rotations, although it is often poor in using internal estimates of rotation for updating.
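Noncommutativity is easy to exhibit with rotation matrices: applying the same yaw and roll in opposite orders leaves the head in different final orientations, so the eye movement needed to reacquire a space-fixed target differs. A minimal sketch (composition here is about space-fixed axes; body-fixed axes would flip the multiplication order, but the conclusion is the same):

```python
import numpy as np

def yaw(deg):
    """Rotation about the vertical (z) axis."""
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

def roll(deg):
    """Rotation about the naso-occipital (x) axis."""
    a = np.radians(deg)
    return np.array([[1.0, 0.0,        0.0],
                     [0.0, np.cos(a), -np.sin(a)],
                     [0.0, np.sin(a),  np.cos(a)]])

# A space-fixed target straight ahead of the initial head orientation.
target = np.array([1.0, 0.0, 0.0])

# The same 45-degree rotations applied in opposite orders give different
# final head orientations.
yaw_then_roll = roll(45) @ yaw(45)
roll_then_yaw = yaw(45) @ roll(45)

# Target direction in the final head frame (rotation inverse = transpose):
# distinct endpoints, as a noncommutative updater must predict.
print(yaw_then_roll.T @ target)
print(roll_then_yaw.T @ target)
```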


2020 ◽  
Author(s):  
Björn Brembs

Nervous systems are typically described as static networks passively responding to external stimuli (i.e., the ‘sensorimotor hypothesis’). However, for more than a century now, evidence has been accumulating that this passive-static perspective is wrong. Instead, evidence suggests that nervous systems dynamically change their connectivity and actively generate behavior so their owners can achieve goals in the world, some of which involve controlling their sensory feedback. This review provides a brief overview of the different historical perspectives on general brain function and details some select modern examples falsifying the sensorimotor hypothesis.


2016 ◽  
Vol 9 (1) ◽  
pp. 27-35 ◽  
Author(s):  
Adi Shaked ◽  
Gerald L. Clore

In their cognitive theory of emotion, Schachter and Singer proposed that feelings are separable from what they are about. As a test, they induced feelings of arousal by injecting epinephrine and then molded them into different emotions. They illuminated how feelings in one moment lead into the next to form a stream of conscious experience. We examine the construction of emotion in a similar spirit. We use the sensory integration process to understand how the brain combines disparate sources of information to construct both perceptual and emotional models of the world even as the world continues to change. We emphasize two processes: affect segmentation (isolating the felt component of an emotion) and affect integration (recombining this feeling with its object).

