Multisensory Integration and Internal Models for Sensing Gravity Effects in Primates

2014 ◽  
Vol 2014 ◽  
pp. 1-10 ◽  
Author(s):  
Francesco Lacquaniti ◽  
Gianfranco Bosco ◽  
Silvio Gravano ◽  
Iole Indovina ◽  
Barbara La Scaleia ◽  
...  

Gravity is crucial for spatial perception, postural equilibrium, and movement generation. The vestibular apparatus is the main sensory system involved in monitoring gravity. Hair cells in the vestibular maculae respond to gravitoinertial forces, but they cannot distinguish between linear accelerations and changes of head orientation relative to gravity. The brain deals with this sensory ambiguity (which has been implicated in fatal airplane accidents) by combining several cues with the otolith signals: angular velocity signals provided by the semicircular canals, proprioceptive signals from muscles and tendons, visceral signals related to gravity, and visual signals. In particular, vision provides both static and dynamic signals about body orientation relative to the vertical, but it poorly discriminates arbitrary accelerations of moving objects. Nevertheless, we are able to visually detect the specific acceleration of gravity from early infancy. This ability depends on the fact that gravity effects are encoded in brain regions that integrate visual, vestibular, and neck proprioceptive signals and combine this information with an internal model of gravity effects.
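The otolith ambiguity described above can be illustrated with a toy calculation (a sketch, not the authors' model; the two-component head-frame geometry and the sign convention gif = g − a are my assumptions):

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def gif_static_tilt(theta):
    """Head-frame gravitoinertial force (GIF) for a static pitch tilt of
    theta radians with no translation: purely the gravity vector."""
    return np.array([-G * np.sin(theta), -G * np.cos(theta)])

def gif_translation(a_x):
    """Head-frame GIF for an upright head translating with acceleration
    a_x along the horizontal axis, using gif = g - a."""
    return np.array([-a_x, -G])

# A small tilt and a matched translation give (nearly) the same otolith signal:
theta = 0.1  # rad
tilt_gif = gif_static_tilt(theta)
trans_gif = gif_translation(G * np.sin(theta))
```

The horizontal components match exactly and the vertical components differ only at second order in the tilt angle, so the otoliths alone cannot tell the two situations apart.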

2007 ◽  
Vol 97 (1) ◽  
pp. 604-617 ◽  
Author(s):  
Eliana M. Klier ◽  
Hongying Wang ◽  
J. Douglas Crawford

Two central, related questions in motor control are 1) how the brain represents movement directions of various effectors like the eyes and head and 2) how it constrains their redundant degrees of freedom. The interstitial nucleus of Cajal (INC) integrates velocity commands from the gaze control system into position signals for three-dimensional eye and head posture. It has been shown that the right INC encodes clockwise (CW)-up and CW-down eye and head components, whereas the left INC encodes counterclockwise (CCW)-up and CCW-down components, similar to the sensitivity directions of the vertical semicircular canals. For the eyes, these canal-like coordinates align with Listing’s plane (a behavioral strategy limiting torsion about the gaze axis). By analogy, we predicted that the INC also encodes head orientation in canal-like coordinates, but instead, aligned with the coordinate axes for the Fick strategy (which constrains head torsion). Unilateral stimulation (50 μA, 300 Hz, 200 ms) evoked CW head rotations from the right INC and CCW rotations from the left INC, with variable vertical components. The observed axes of head rotation were consistent with a canal-like coordinate system. Moreover, as predicted, these axes remained fixed in the head, rotating with initial head orientation like the horizontal and torsional axes of a Fick coordinate system. This suggests that the head is ordinarily constrained to zero torsion in Fick coordinates by equally activating CW/CCW populations of neurons in the right/left INC. These data support a simple mechanism for controlling head orientation through the alignment of brain stem neural coordinates with natural behavioral constraints.
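The Fick zero-torsion constraint discussed above can be sketched numerically (an illustrative sketch assuming the standard Fick gimbal order: yaw about the vertical axis, then pitch, then torsion about the final naso-occipital axis):

```python
import numpy as np

def rot_x(a):  # torsion (roll) about the naso-occipital axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):  # pitch about the interaural axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):  # yaw about the vertical axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def fick_to_matrix(yaw, pitch, torsion):
    """Compose a head orientation in Fick gimbal order (yaw, pitch, torsion)."""
    return rot_z(yaw) @ rot_y(pitch) @ rot_x(torsion)

def fick_torsion(R):
    """Recover the Fick torsion angle from a rotation matrix."""
    return np.arctan2(R[2, 1], R[2, 2])

# Any combination of yaw and pitch with zero Fick torsion stays torsion-free:
R = fick_to_matrix(0.5, -0.3, 0.0)
```

Constraining the head to zero torsion in these coordinates leaves two free angles (yaw and pitch), which is the redundancy-reducing strategy the stimulation data support.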


Development ◽  
1996 ◽  
Vol 123 (1) ◽  
pp. 241-254 ◽  
Author(s):  
T.T. Whitfield ◽  
M. Granato ◽  
F.J. van Eeden ◽  
U. Schach ◽  
M. Brand ◽  
...  

Mutations giving rise to anatomical defects in the inner ear have been isolated in a large-scale screen for mutations causing visible abnormalities in the zebrafish embryo (Haffter, P., Granato, M., Brand, M. et al. (1996) Development 123, 1–36). Fifty-eight mutants have been classified as having a primary ear phenotype; these fall into several phenotypic classes, affecting presence or size of the otoliths, size and shape of the otic vesicle and formation of the semicircular canals, and define at least 20 complementation groups. Mutations in seven genes cause loss of one or both otoliths, but do not appear to affect development of other structures within the ear. Mutations in seven genes affect morphology and patterning of the inner ear epithelium, including formation of the semicircular canals and, in some, development of sensory patches (maculae and cristae). Within this class, dog-eared mutants show abnormal development of the semicircular canals and lack cristae within the ear, while in van gogh, semicircular canals fail to form altogether, resulting in a tiny otic vesicle containing a single sensory patch. Both these mutants show defects in the expression of homeobox genes within the otic vesicle. In a further class of mutants, ear size is affected while patterning appears to be relatively normal; mutations in three genes cause expansion of the otic vesicle, while in little ears and microtic, the ear is abnormally small but still contains all five sensory patches, as in the wild type. Many of the ear and otolith mutants show an expected behavioural phenotype: embryos fail to balance correctly, and may swim on their sides, upside down, or in circles. Several mutants with similar balance defects have also been isolated that have no obvious structural ear defect, but that may include mutants with vestibular dysfunction of the inner ear (Granato, M., van Eeden, F. J. M., Schach, U. et al. (1996) Development 123, 399–413).
Mutations in 19 genes causing primary defects in other structures also show an ear defect. In particular, ear phenotypes are often found in conjunction with defects of neural crest derivatives (pigment cells and/or cartilaginous elements of the jaw). At least one mutant, dog-eared, shows defects in both the ear and another placodally derived sensory system, the lateral line, while hypersensitive mutants have additional trunk lateral line organs.


2015 ◽  
Vol 28 (5-6) ◽  
pp. 507-524 ◽  
Author(s):  
Barry M. Seemungal

Vestibular cognition can be divided into two main functions: a primary vestibular sensation of self-motion and a derived sensation of spatial orientation. Although the vestibular system requires calibration from other senses for optimal functioning, both vestibular spatial and vestibular motion perception are typically employed when navigating without vision. A recent important finding is the cerebellar mediation of the uncoupling of reflex (i.e., the vestibulo-ocular reflex) from vestibular motion perception (Perceptuo-Reflex Uncoupling). Which brain regions mediate vestibular motion and vestibular spatial perception is an area of ongoing research; however, there are data to support the notion that vestibular motion perception is mediated by multiple brain regions. In contrast, vestibular spatial perception appears to be mediated by posterior brain areas, although currently the exact locus is unclear. I will discuss the experimental evidence that supports this functional dichotomy in vestibular cognition (i.e., motion processing vs. spatial orientation). Along the way I will highlight relevant practical tips for testing vestibular cognition.


eLife ◽  
2014 ◽  
Vol 3 ◽  
Author(s):  
Karolina Marciniak ◽  
Artin Atabaki ◽  
Peter W Dicke ◽  
Peter Thier

Primates use gaze cues to follow peer gaze to an object of joint attention. Gaze following in monkeys is largely determined by head or face orientation. We used fMRI in rhesus monkeys to identify brain regions underlying head-gaze following and to assess their relationship to the ‘face patch’ system, the latter being the likely source of information on face orientation. We trained monkeys to locate targets by either following head gaze or using a learned association of face identity with the same targets. Head-gaze following activated a distinct region in the posterior STS, close to, albeit not overlapping with, the medial face patch delineated by passive viewing of faces. This ‘gaze following patch’ may be the substrate of the geometrical calculations needed to translate information on head orientation from the face patches into precise shifts of attention, taking the spatial relationship of the two interacting agents into account.


2021 ◽  
Author(s):  
David Acunzo ◽  
David Melcher

Visual processing mainly occurs during fixation periods, which are separated by saccadic eye movements; this necessitates close coordination between sensory and motor systems. It has been suggested that the intention to make a saccade can modulate neural activity, including predictive changes, suppression of peri-saccadic retinal input, and trans-saccadic integration. Consistent with this idea, modulations of neural activity around the time of saccades have been reported in non-human species, showing non-visually mediated, extraretinal responses in specific brain regions. In humans, however, peri-saccadic whole-brain activity has mainly been studied in the context of a perceptual task, making it difficult to disentangle activity related to the task, visual transients from retinal stimulation, and non-visual (saccade-related) responses. We measured magnetoencephalography (MEG) theta (3–7 Hz) and alpha (8–12 Hz) activity during voluntary horizontal saccade execution between two fixation points. To distinguish between visually and non-visually mediated activity, participants engaged in three tasks: voluntary saccades in near-darkness, fixation with visual input shifted to simulate the saccade, and volitional saccades in total darkness. Using correlational analyses, we found that patterns of neural activity are consistent with contributions of two separate mechanisms, one related to saccades (non-visual/extraretinal) and the other linked to the processing of visual input at the beginning of the new fixation (visual/retinal). Changes in occipital alpha power and instantaneous frequency showed a similar time course in the near-dark and simulated-saccade conditions, suggesting an effect of visually evoked responses. In contrast, alterations in parietal-occipital theta power and phase clustering were consistent with a non-visually-driven (extraretinal) mechanism, with similar multivariate patterns for the near-dark and full-darkness conditions.
Some effects, such as theta phase reset and alterations in alpha power, showed separable contributions of both the saccade and visual transient, with differing time courses. This combination of visual and non-visual mechanisms may support sensorimotor integration during active vision.
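Phase clustering of the kind measured here is commonly quantified as inter-trial phase clustering (ITPC), the length of the mean resultant vector of the per-trial phases; a minimal sketch (illustrative, not the authors' exact pipeline):

```python
import numpy as np

def itpc(phases):
    """Inter-trial phase clustering: |mean of unit phasors|.
    1.0 = identical phase across trials; near 0 = uniformly scattered."""
    return np.abs(np.mean(np.exp(1j * np.asarray(phases))))

rng = np.random.default_rng(0)
aligned = itpc(np.full(200, 0.7))               # a phase reset looks like this
scattered = itpc(rng.uniform(0, 2 * np.pi, 20000))  # no reset: ITPC near zero
```

A saccade-locked theta phase reset shows up as a transient rise in ITPC at the saccade time, even without any change in power.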


2004 ◽  
Vol 92 (2) ◽  
pp. 905-925 ◽  
Author(s):  
Andrea M. Green ◽  
Dora E. Angelaki

The ability to navigate in the world and execute appropriate behavioral responses depends critically on the contribution of the vestibular system to the detection of motion and spatial orientation. A complicating factor is that otolith afferents encode inertial and gravitational accelerations equivalently. Recent studies have demonstrated that the brain can resolve this sensory ambiguity by combining signals from both the otolith and semicircular canal sensors, although it remains unknown how the brain integrates these sensory contributions to perform the nonlinear vector computations required to accurately detect head movement in space. Here, we illustrate how a physiologically relevant, nonlinear integrative neural network could be used to perform the required computations for inertial motion detection along the interaural head axis. The proposed model not only can simulate recent behavioral observations, including a translational vestibuloocular reflex driven by the semicircular canals, but also accounts for several previously unexplained characteristics of central neural responses such as complex otolith–canal convergence patterns and the prevalence of dynamically processed otolith signals. A key model prediction, implied by the required computations for tilt–translation discrimination, is a coordinate transformation of canal signals from a head-fixed to a spatial reference frame. As a result, cell responses may reflect canal signal contributions that cannot be easily detected or distinguished from otolith signals. New experimental protocols are proposed to characterize these cells and identify their contributions to spatial motion estimation. The proposed theoretical framework makes an essential first link between the computations for inertial acceleration detection derived from the physical laws of motion and the neural response properties predicted in a physiologically realistic network implementation.
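The core nonlinear computation can be caricatured in a few lines (a sketch under assumed conventions, not the authors' network: a head-frame gravity estimate rotated by canal angular velocity via dg/dt = −ω × g, with translation recovered from the otolith GIF assuming gif = g − a):

```python
import numpy as np

def update_gravity_estimate(g_est, omega, dt):
    """Rotate the head-frame gravity estimate using canal angular velocity:
    dg/dt = -omega x g (one explicit Euler step)."""
    return g_est + dt * (-np.cross(omega, g_est))

def estimated_translation(gif, g_est):
    """Recover linear acceleration from the otolith signal, gif = g - a."""
    return g_est - gif

g_est = np.array([0.0, 0.0, -9.81])
# Pure translation (head upright): gif = g - a with a = 1 m/s^2 interaural.
gif = g_est - np.array([1.0, 0.0, 0.0])
a_hat = estimated_translation(gif, g_est)
```

The cross product makes the computation nonlinear in the canal signal, and carrying it out in head coordinates requires exactly the canal-to-spatial transformation the model predicts.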


2000 ◽  
Vol 84 (4) ◽  
pp. 2001-2015 ◽  
Author(s):  
L. H. Zupan ◽  
R. J. Peterka ◽  
D. M. Merfeld

Sensory systems often provide ambiguous information. Integration of various sensory cues is required for the CNS to resolve sensory ambiguity and elicit appropriate responses. The vestibular system includes two types of sensors: the semicircular canals, which measure head rotation, and the otolith organs, which measure gravito-inertial force (GIF), the sum of gravitational force and inertial force due to linear acceleration. According to Einstein's equivalence principle, gravitational force is indistinguishable from inertial force due to linear acceleration. As a consequence, otolith measurements must be supplemented with other sensory information for the CNS to distinguish tilt from translation. The GIF resolution hypothesis states that the CNS estimates gravity and linear acceleration, so that the difference between estimates of gravity and linear acceleration matches the measured GIF. Both otolith and semicircular canal cues influence this estimation of gravity and linear acceleration. The GIF resolution hypothesis predicts that inaccurate estimates of both gravity and linear acceleration can occur due to central interactions of sensory cues. The existence of specific patterns of vestibuloocular reflexes (VOR) related to these inaccurate estimates can be used to test the GIF resolution hypothesis. To investigate this hypothesis, we measured eye movements during two different protocols. In one experiment, eight subjects were rotated at a constant velocity about an earth-vertical axis and then tilted 90° in darkness to one of eight different evenly spaced final orientations, a so-called “dumping” protocol. Three speeds (200, 100, and 50°/s) and two directions, clockwise (CW) and counterclockwise (CCW), of rotation were tested. In another experiment, four subjects were rotated at a constant velocity (200°/s, CW and CCW) about an earth-horizontal axis and stopped in two different final orientations (nose-up and nose-down), a so-called “barbecue” protocol. 
The GIF resolution hypothesis predicts that post-rotatory horizontal VOR eye movements for both protocols should include an “induced” VOR component, compensatory to an interaural estimate of linear acceleration, even though no true interaural linear acceleration is present. The GIF resolution hypothesis accurately predicted VOR and induced VOR dependence on rotation direction, rotation speed, and head orientation. Alternative hypotheses stating that frequency segregation may discriminate tilt from translation or that the post-rotatory VOR time constant is dependent on head orientation with respect to the GIF direction did not predict the observed VOR for either experimental protocol.


Author(s):  
Chao-Tsung Hsiao ◽  
Jingsen Ma ◽  
Georges L. Chahine

The effects of gravity on a phase separator are studied numerically using an Eulerian/Lagrangian two-phase flow approach. The separator uses high-intensity swirl to separate bubbles from the liquid. The two-phase flow enters a cylindrical swirl chamber tangentially and rotates around the cylinder axis. On Earth, as the bubbles are captured by the vortex formed inside the swirl chamber by the centripetal force, they also experience the buoyancy force due to gravity. In a reduced- or zero-gravity environment, buoyancy is reduced or absent, and capture of the bubbles by the vortex is modified. The present numerical simulations enable study of the relative importance of the acceleration of gravity in the bubble capture by the swirl flow in the separator. In the absence of gravity, the bubbles become stratified by size, with the larger bubbles entering the core region earlier than the smaller ones. In the presence of gravity, however, stratification is more complex, as the two acceleration fields (due to gravity and to rotation) compete or combine during bubble capture.
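The competition between the two acceleration fields can be gauged with a back-of-envelope comparison (my sketch, not the authors' simulation; the flow speed and radius values are hypothetical):

```python
def swirl_acceleration(v_theta, r):
    """Centripetal acceleration v^2/r (m/s^2) for tangential speed
    v_theta (m/s) at radius r (m) inside the swirl chamber."""
    return v_theta ** 2 / r

G = 9.81  # gravitational acceleration, m/s^2

# With a (hypothetical) 10 m/s tangential flow at r = 5 cm, the rotational
# field is hundreds of times stronger than gravity, so swirl dominates capture;
# at low swirl or large radius, buoyancy becomes competitive.
ratio = swirl_acceleration(10.0, 0.05) / G
```

When this ratio is large, bubble trajectories are essentially gravity-independent, which is why the separator remains effective in reduced gravity.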


2018 ◽  
Author(s):  
G. Blohm ◽  
H. Alikhanian ◽  
W. Gaetz ◽  
H.C. Goltz ◽  
J.F.X. DeSouza ◽  
...  

Movement planning involves transforming the sensory goal representation into a command in motor coordinates. Surprisingly, the real-time dynamics of sensorimotor transformations at the whole-brain level remain unknown, in part due to the spatiotemporal limitations of fMRI and neurophysiological recordings. Here, we used magnetoencephalography (MEG) during pro-/anti-wrist pointing to determine (1) the cortical areas involved in transforming visual signals into appropriate hand motor commands, and (2) how this transformation occurs in real time, both within and across the regions involved. We computed sensory, motor, and sensorimotor indices of direction coding in 16 bilateral brain regions, based on hemispherically lateralized de/synchronization in the α (7–15 Hz) and β (15–35 Hz) bands. We found a visuomotor progression, in both the α and β bands, from pure sensory codes in ‘early’ occipital-parietal areas, to a temporal transition from sensory to motor coding in the majority of parietal-frontal sensorimotor areas, to a pure motor code. Further, the timing of these transformations revealed a top-down pro/anti cue influence that propagated ‘backwards’ from frontal through posterior cortical areas. These data directly demonstrate a progressive, real-time transformation, both within and across the entire occipital-parietal-frontal network, that follows specific rules of spatial distribution and temporal order.
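The pro/anti dissociation that underlies the sensory and motor indices can be sketched as follows (illustrative only; the labels and classification rule are my simplification, not the paper's index formula):

```python
def movement_side(target_side, trial_type):
    """On pro trials the movement goes toward the target; on anti trials,
    away from it (sides coded as +1 / -1)."""
    return target_side if trial_type == "pro" else -target_side

def classify_coding(signal_side, target_side, trial_type):
    """Classify a lateralized signal: only anti trials dissociate a sensory
    code (follows the target side) from a motor code (follows the movement)."""
    if trial_type == "pro":
        return "ambiguous"  # target side and movement side coincide
    return "sensory" if signal_side == target_side else "motor"
```

Because pro trials cannot separate the two codes, it is the anti trials, and the moment a region's lateralization flips from target side to movement side, that reveal where and when the transformation happens.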


Author(s):  
Valeria C Caruso ◽  
Daniel S Pages ◽  
Marc A. Sommer ◽  
Jennifer M Groh

Stimulus locations are detected differently by different sensory systems, but ultimately they yield similar percepts and behavioral responses. How the brain transcends initial differences to compute similar codes is unclear. We quantitatively compared the reference frames of two sensory modalities, vision and audition, across three interconnected brain areas involved in generating saccades, namely the frontal eye fields (FEF), lateral and medial parietal cortex (M/LIP), and the superior colliculus (SC). We recorded from single neurons in head-restrained monkeys performing auditory- and visually-guided saccades from variable initial fixation locations, and evaluated whether their receptive fields were better described as eye-centered, head-centered, or hybrid (i.e., not anchored uniquely to head or eye orientation). We found a progression of reference frames across areas and across time, with considerable hybridness and persistent differences between modalities during most epochs and brain regions. For both modalities, the SC was more eye-centered than the FEF, which in turn was more eye-centered than the predominantly hybrid M/LIP. In all three areas and in temporal epochs from stimulus onset to movement, visual signals were more eye-centered than auditory signals. In the SC and FEF, auditory signals became more eye-centered at the time of the saccade than they were initially after stimulus onset, but only in the SC at the time of the saccade did auditory signals become predominantly eye-centered. The results indicate that visual and auditory signals both undergo transformations, ultimately reaching the same final reference frame but via different dynamics across brain regions and time.
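The eye-centered vs. head-centered distinction tested here can be sketched with a toy receptive field (a hypothetical 1-D Gaussian tuning curve; the parameter values are illustrative, not fitted to any data):

```python
import numpy as np

def response_eye_centered(target_head, eye_pos, preferred=10.0, width=8.0):
    """Toy neuron whose Gaussian receptive field is anchored in eye
    coordinates: the response depends only on target_head - eye_pos (deg)."""
    x = target_head - eye_pos  # eye-centered target location
    return np.exp(-((x - preferred) ** 2) / (2 * width ** 2))

# Shifting the target and fixation together leaves the response unchanged
# (the eye-centered signature); shifting fixation alone changes it.
r_base = response_eye_centered(20.0, 10.0)
r_shifted_together = response_eye_centered(25.0, 15.0)
r_shifted_eye_only = response_eye_centered(20.0, 15.0)
```

A head-centered neuron would show the opposite pattern (invariance to eye position alone), and a hybrid neuron would shift its tuning by only part of the change in fixation, which is what varying the initial fixation location across trials tests.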

