The A-Effect and Global Motion

Vision ◽  
2019 ◽  
Vol 3 (2) ◽  
pp. 13
Author(s):  
Pearl Guterman ◽  
Robert Allison

When the head is tilted, an objectively vertical line viewed in isolation is typically perceived as tilted. We explored whether this shift also occurs when viewing global motion displays perceived as either object-motion or self-motion. Observers stood and lay left side down while viewing (1) a static line, (2) a random-dot display of 2-D (planar) motion or (3) a random-dot display of 3-D (volumetric) global motion. On each trial, the line orientation or motion direction was tilted from the gravitational vertical and observers indicated whether the tilt was clockwise or counter-clockwise from the perceived vertical. Psychometric functions were fit to the data and shifts in the point of subjective verticality (PSV) were measured. When the whole body was tilted, the perceived tilt of both a static line and the direction of optic flow were biased in the direction of the body tilt, demonstrating the so-called A-effect. However, we found significantly larger shifts for the static line than for volumetric global motion, as well as larger shifts for volumetric displays than for planar displays. The A-effect was larger when the motion was experienced as self-motion than when it was experienced as object-motion. Discrimination thresholds were also lower (more precise) in the self-motion than in the object-motion conditions. Different magnitude A-effects for the line and motion conditions—and for object- and self-motion—may be due to differences in how idiotropic (body) and vestibular signals are combined, particularly in the case of vection, which occurs despite visual-vestibular conflict.
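The PSV measurement described in this abstract can be sketched numerically. The following is an illustrative example only, not the study's analysis code: it assumes a cumulative-Gaussian psychometric function fit by maximum likelihood (here via a simple grid search) to hypothetical clockwise-response counts; the PSV is the 50% point of the fitted curve (its mean μ), and a negative PSV indicates a bias toward the body tilt, i.e., an A-effect.

```python
import math

def cum_gauss(x, mu, sigma):
    """Cumulative Gaussian: probability of a 'clockwise' response at tilt x (deg)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def fit_psv(tilts, n_cw, n_total):
    """Grid-search maximum-likelihood fit; returns (mu, sigma).
    mu is the point of subjective verticality (PSV); sigma sets the threshold."""
    best, best_ll = (0.0, 1.0), -float("inf")
    for mu in [m / 10 for m in range(-150, 151)]:       # -15 .. 15 deg
        for sigma in [s / 10 for s in range(5, 101)]:   # 0.5 .. 10 deg
            ll = 0.0
            for x, k, n in zip(tilts, n_cw, n_total):
                p = min(max(cum_gauss(x, mu, sigma), 1e-9), 1 - 1e-9)
                ll += k * math.log(p) + (n - k) * math.log(1 - p)
            if ll > best_ll:
                best_ll, best = ll, (mu, sigma)
    return best

# Hypothetical data: body tilted left; responses biased toward the body (A-effect)
tilts = [-12, -8, -4, 0, 4, 8, 12]      # line tilt (deg from gravitational vertical)
n_cw  = [1, 2, 5, 12, 18, 19, 20]       # 'clockwise' responses out of 20 trials
n_tot = [20] * 7
mu, sigma = fit_psv(tilts, n_cw, n_tot)
print(f"PSV = {mu:.1f} deg, threshold = {sigma:.1f} deg")
```

With these made-up counts the fitted PSV comes out a few degrees negative, i.e., shifted toward the (left) body tilt, which is the direction of the A-effect reported above.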

2004 ◽  
Vol 14 (5) ◽  
pp. 375-385 ◽  
Author(s):  
E.L. Groen ◽  
W. Bles

We examined to what extent body tilt may augment the perception of visually simulated linear self-acceleration. Fourteen subjects judged visual motion profiles of fore-aft motion at four different frequencies between 0.04–0.33 Hz, and at three different acceleration amplitudes (0.44, 0.88 and 1.76 m/s²). Simultaneously, subjects were tilted backward and forward about their pitch axis. The amplitude of pitch tilt was systematically varied. Using a two-alternative forced-choice paradigm, psychometric curves were calculated in order to determine: 1) the minimum tilt amplitude required to generate a linear self-motion percept in more than 50% of the cases, and 2) the maximum tilt amplitude at which rotation remains sub-threshold in more than 50% of the cases. The results showed that the simulation of linear self-motion became more realistic with the application of whole-body tilt, as long as the tilt rate remained under the detection threshold of about 3 deg/s. This value is in close agreement with the empirical rate limit commonly used in flight simulation. The minimum required motion cue was inversely proportional to stimulus frequency, and increased with the amplitude of the visual displacement (rather than acceleration). As a consequence, the range of useful tilt stimuli became more critical with increasing stimulus frequency. We conclude that this psychophysical approach reveals valid parameters for motion drive algorithms used in motion-base simulators.
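The narrowing range of useful tilt stimuli at higher frequencies follows directly from the ~3 deg/s rate limit reported above. A short sketch (the formula, not study data): for a sinusoidal pitch tilt A·sin(2πft), the peak tilt rate is 2πfA, so the largest sub-threshold amplitude shrinks as frequency rises.

```python
import math

def max_subthreshold_tilt(freq_hz, rate_limit_deg_s=3.0):
    """Largest sinusoidal tilt amplitude (deg) whose peak rate stays under the
    rotation-detection threshold: peak rate of A*sin(2*pi*f*t) is 2*pi*f*A."""
    return rate_limit_deg_s / (2 * math.pi * freq_hz)

# Across the frequency range used in the study (0.04-0.33 Hz):
for f in (0.04, 0.10, 0.33):
    print(f"{f:.2f} Hz: max sub-threshold tilt ~ {max_subthreshold_tilt(f):.1f} deg")
```

At 0.04 Hz roughly 12 deg of tilt stays under the rate limit, but at 0.33 Hz only about 1.4 deg does, illustrating why the useful tilt range becomes critical at the higher stimulus frequencies.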


2002 ◽  
Vol 11 (6) ◽  
pp. 349-355
Author(s):  
Ognyan I. Kolev

Purpose: To further investigate the direction of (I) nystagmus and (II) self-motion perception induced by two stimuli: (a) caloric vestibular stimulation and (b) a sudden halt during vertical axis rotation. Subjects and methods: Twelve normal humans received caloric stimulation at 44°C, 30°C, and 20°C while in a supine position with the head inclined 30° upwards. In a second test they were rotated around the vertical axis with the head randomly placed in two positions: tilted 30° forward or tilted 60° backward, at a constant velocity of 90°/sec for 2 minutes and then suddenly stopped. After both tests they were asked to describe their sensations of self-motion. Eye movements were recorded with an infrared video technique. Results: Caloric stimulation evoked only horizontal nystagmus in all subjects and induced a non-uniform, complex perception of angular motion in the frontal and transverse planes (the former dominated) and of linear movement along the antero-posterior axis (sinking dominated) in the subject's coordinates. The self-motion was felt with the whole body or with a part of the body. Generally the perception evoked by cold (30°C) and warm (44°C) calorics was similar, although there were some differences. The stronger stimulus (20°C) evoked not only quantitative but also qualitative differences in perception. The abrupt halt of rotation induced self-motion perception and nystagmus only in the plane of rotation. The self-motion was felt with the whole body. Conclusion: There was no difference in the nystagmus evoked by caloric stimulation and a sudden halt of vertical axis rotation (in head positions that stimulate the horizontal canals); however, the two stimuli evoked different perceptions of self-motion.
Calorics provoked the sensation of self-rotation in the frontal plane and of linear motion, neither of which corresponded to the direction of nystagmus, as well as arcing and a reset phenomenon during angular and linear self-motion. Caloric-induced self-motion can be felt predominantly or only with a part of the body, depending on the self-motion intensity. The findings indicate that, unlike the self-motion induced by a sudden halt of vertical axis rotation, several mechanisms take part in generating caloric-induced self-motion.


2019 ◽  
Vol 32 (3) ◽  
pp. 165-178 ◽  
Author(s):  
Mathieu Koppen ◽  
Arjan C. ter Horst ◽  
W. Pieter Medendorp

Abstract When walking or driving, it is of the utmost importance to continuously track the spatial relationship between objects in the environment and the moving body in order to prevent collisions. Although this process of spatial updating occurs naturally, it involves the processing of a myriad of noisy and ambiguous sensory signals. Here, using a psychometric approach, we investigated the integration of visual optic flow and vestibular cues in spatially updating a remembered target position during a linear displacement of the body. Participants were seated on a linear sled, immersed in a stereoscopic virtual reality environment. They had to remember the position of a target, briefly presented before a sideward translation of the body involving supra-threshold vestibular cues and whole-field optic flow that provided slightly discrepant motion information. After the motion, using a forced-choice response, participants indicated whether the location of a brief visual probe was left or right of the remembered target position. Our results show that in a spatial updating task involving passive linear self-motion, humans integrate optic flow and vestibular self-displacement information according to a weighted-averaging process with, averaged across subjects, about four times as much weight assigned to the visual as to the vestibular contribution (i.e., 79% visual weight). We discuss our findings with respect to previous literature on the effect of optic flow on spatial updating performance.
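The weighted-averaging process described here has a standard closed form under maximum-likelihood integration: each cue is weighted by its inverse variance, normalized by the sum. A hedged sketch (the noise values and displacements below are illustrative, not taken from the study): a 79% visual weight corresponds to the vestibular cue being roughly twice as noisy as the visual one.

```python
def ml_weights(sigma_vis, sigma_vest):
    """Inverse-variance (maximum-likelihood) cue weights; returns (w_vis, w_vest)."""
    w_vis = (1 / sigma_vis**2) / (1 / sigma_vis**2 + 1 / sigma_vest**2)
    return w_vis, 1 - w_vis

def fused_estimate(x_vis, x_vest, sigma_vis, sigma_vest):
    """Weighted average of discrepant visual and vestibular displacement cues."""
    w_vis, w_vest = ml_weights(sigma_vis, sigma_vest)
    return w_vis * x_vis + w_vest * x_vest

# Illustrative: vestibular noise ~1.94x the visual noise yields ~79% visual weight
w_vis, w_vest = ml_weights(1.0, 1.94)
print(f"visual weight = {w_vis:.2f}")   # ~0.79
# Discrepant cues: vision signals 10 cm of self-displacement, vestibular 8 cm
print(f"fused estimate = {fused_estimate(10.0, 8.0, 1.0, 1.94):.2f} cm")
```

The fused estimate sits close to the visual value, as expected when the visual cue dominates the weighting.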


2017 ◽  
Vol 117 (5) ◽  
pp. 2037-2052 ◽  
Author(s):  
Koeun Lim ◽  
Faisal Karmali ◽  
Keyvan Nicoucar ◽  
Daniel M. Merfeld

When making perceptual decisions, humans have been shown to optimally integrate independent noisy multisensory information, matching maximum-likelihood (ML) limits. Such ML estimators provide a theoretic limit to perceptual precision (i.e., minimal thresholds). However, how the brain combines two interacting (i.e., not independent) sensory cues remains an open question. To study the precision achieved when combining interacting sensory signals, we measured perceptual roll tilt and roll rotation thresholds between 0 and 5 Hz in six normal human subjects. Primary results show that roll tilt thresholds between 0.2 and 0.5 Hz were significantly lower than predicted by a ML estimator that includes only vestibular contributions that do not interact. In this paper, we show how other cues (e.g., somatosensation) and an internal representation of sensory and body dynamics might independently contribute to the observed performance enhancement. In short, a Kalman filter was combined with an ML estimator to match human performance, whereas the potential contribution of nonvestibular cues was assessed using published bilateral loss patient data. Our results show that a Kalman filter model including previously proven canal-otolith interactions alone (without nonvestibular cues) can explain the observed performance enhancements, as can a model that includes nonvestibular contributions. NEW & NOTEWORTHY We found that human whole-body self-motion direction-recognition thresholds measured during dynamic roll tilts were significantly lower than those predicted by a conventional maximum-likelihood weighting of the roll angular velocity and quasistatic roll tilt cues. Here, we show that two models can each match this “apparent” better-than-optimal performance: 1) inclusion of a somatosensory contribution and 2) inclusion of a dynamic sensory interaction between canal and otolith cues via a Kalman filter model.
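The conventional maximum-likelihood prediction that the measured thresholds are compared against can be written down directly. A hedged sketch with made-up threshold values (not the study's data): for two independent cues the combined threshold follows an inverse-quadratic sum, so the prediction is always at or below the better single-cue threshold; measured thresholds below this bound are the "better-than-optimal" signature discussed above.

```python
import math

def ml_combined_threshold(t1, t2):
    """Maximum-likelihood prediction for two independent cues:
    1/T_c^2 = 1/t1^2 + 1/t2^2, so T_c = 1/sqrt(1/t1^2 + 1/t2^2)."""
    return 1.0 / math.sqrt(1.0 / t1**2 + 1.0 / t2**2)

# Illustrative single-cue thresholds (arbitrary units), not values from the study:
t_velocity = 1.0   # roll angular-velocity (canal) cue
t_tilt     = 1.5   # quasistatic roll-tilt (otolith) cue
t_pred = ml_combined_threshold(t_velocity, t_tilt)
print(f"ML-predicted combined threshold = {t_pred:.2f}")
# A measured threshold below t_pred would be 'better than optimal' under this model
```

Because the prediction already beats the best single cue, any measured threshold below it cannot be explained by independent-cue ML weighting alone, which is what motivates the Kalman filter and somatosensory accounts in the abstract.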


Cephalalgia ◽  
2006 ◽  
Vol 26 (8) ◽  
pp. 949-959 ◽  
Author(s):  
AM McKendrick ◽  
A Turpin ◽  
S Webb ◽  
DR Badcock

Some migraineurs have increased thresholds for the detection of global dot motion. We investigated whether migraineurs show consequential abnormalities in the determination of direction of self-motion (heading) from simulated optic flow. The ability to determine heading from optic flow is likely to be necessary for optimal determination of self-motion through the environment. Twenty-five migraineurs and 25 controls participated. Global dot motion coherence thresholds were assessed, in addition to performance on two simulated heading tasks: one with a symmetrical flow field, and the second with differing velocity of optic flow on the left and right sides of the participant. While some migraineurs demonstrated abnormal global motion coherence thresholds, there was no difference in performance on the heading tasks at either simulated walking (5 km/h) or driving (50 km/h) speeds. Increased global motion coherence thresholds in migraineurs do not result in abnormal judgements of heading from 100% coherent optic flow.


2021 ◽  
Author(s):  
Omid A Zobeiri ◽  
Kathleen E Cullen

The ability to accurately control our posture and perceive spatial orientation during self-motion requires knowledge of the motion of both the head and body. However, whereas the vestibular sensors and nuclei directly encode head motion, no sensors directly encode body motion. Instead, the integration of vestibular and neck proprioceptive inputs is necessary to transform vestibular information into the body-centric reference frame required for postural control. The anterior vermis of the cerebellum is thought to play a key role in this transformation, yet how its Purkinje cells integrate these inputs or what information they dynamically encode during self-motion remains unknown. Here we recorded the activity of individual anterior vermis Purkinje cells in alert monkeys during passively applied whole-body, body-under-head, and head-on-body rotations. Most neurons dynamically encoded an intermediate representation of self-motion between head and body motion. Notably, these neurons responded to both vestibular and neck proprioceptive stimulation and showed considerable heterogeneity in their response dynamics. Furthermore, their vestibular responses demonstrated tuning in response to changes in head-on-body position. In contrast, a small remaining percentage of neurons sensitive only to vestibular stimulation unambiguously encoded head-in-space motion across conditions. Using a simple population model, we establish that combining responses from 40 Purkinje cells can explain the responses of their target neurons in deep cerebellar nuclei across all self-motion conditions. We propose that the observed heterogeneity in Purkinje cells underlies the cerebellum's capacity to compute the dynamic representation of body motion required to ensure accurate postural control and perceptual stability in our daily lives.


i-Perception ◽  
2017 ◽  
Vol 8 (3) ◽  
pp. 204166951770820 ◽  
Author(s):  
Diederick C. Niehorster ◽  
Li Li

How do we perceive object motion during self-motion using visual information alone? Previous studies have reported that the visual system can use optic flow to identify and globally subtract the retinal motion component resulting from self-motion to recover scene-relative object motion, a process called flow parsing. In this article, we developed a retinal motion nulling method to directly measure and quantify the magnitude of flow parsing (i.e., flow parsing gain) in various scenarios to examine the accuracy and tuning of flow parsing for the visual perception of object motion during self-motion. We found that flow parsing gains were below unity for all displays in all experiments; and that increasing self-motion and object motion speed did not alter flow parsing gain. We conclude that visual information alone is not sufficient for the accurate perception of scene-relative motion during self-motion. Although flow parsing performs global subtraction, its accuracy also depends on local motion information in the retinal vicinity of the moving object. Furthermore, the flow parsing gain was constant across common self-motion or object motion speeds. These results can be used to inform and validate computational models of flow parsing.
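The flow parsing gain measured with the retinal-motion-nulling method has a simple operational definition, sketched below with hypothetical numbers (the velocities are illustrative, not from the study): the gain is the fraction of the self-motion-induced retinal motion component that the visual system subtracts, estimated from the retinal motion needed to null the object's perceived scene-relative motion.

```python
def flow_parsing_gain(nulled_component, self_motion_component):
    """Fraction of the self-motion-induced retinal motion that is subtracted.
    A gain of 1 would correspond to complete (accurate) flow parsing."""
    return nulled_component / self_motion_component

# Hypothetical nulling measurement: simulated self-motion adds 4 deg/s of retinal
# motion to the object, but only 3 deg/s must be nulled for the object to appear
# stationary in the scene
g = flow_parsing_gain(3.0, 4.0)
print(f"flow parsing gain = {g:.2f}")  # below unity, as reported above
```

A gain below unity, as in this example, means the subtraction is incomplete, so visual information alone underestimates scene-relative object motion during self-motion.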


2014 ◽  
Vol 1 (3) ◽  
pp. 140185 ◽  
Author(s):  
Ludwig Wallmeier ◽  
Lutz Wiegrebe

The ability of blind humans to navigate complex environments through echolocation has received rapidly increasing scientific interest. However, technical limitations have precluded a formal quantification of the interplay between echolocation and self-motion. Here, we use a novel virtual echo-acoustic space technique to formally quantify the influence of self-motion on echo-acoustic orientation. We show that both the vestibular and proprioceptive components of self-motion contribute significantly to successful echo-acoustic orientation in humans: specifically, our results show that vestibular input induced by whole-body self-motion resolves orientation-dependent biases in echo-acoustic cues. Fast head motions, relative to the body, provide additional proprioceptive cues which allow subjects to effectively assess echo-acoustic space referenced against the body orientation. These psychophysical findings clearly demonstrate that human echolocation is well suited to drive precise locomotor adjustments. Our data shed new light on the sensory–motor interactions, and on possible optimization strategies underlying echolocation in humans.


2017 ◽  
Vol 30 (7-8) ◽  
pp. 739-761 ◽  
Author(s):  
Ramy Kirollos ◽  
Robert S. Allison ◽  
Stephen Palmisano

Behavioural studies have consistently found stronger vection responses for oscillating, compared to smooth/constant, patterns of radial flow (the simulated viewpoint oscillation advantage for vection). Traditional accounts predict that simulated viewpoint oscillation should impair vection by increasing visual–vestibular conflicts in stationary observers (as this visual oscillation simulates self-accelerations that should strongly stimulate the vestibular apparatus). However, support for increased vestibular activity during accelerating vection has been mixed in the brain imaging literature. This fMRI study examined BOLD activity in visual (cingulate sulcus visual area — CSv; medial temporal complex — MT+; V6; precuneus motion area — PcM) and vestibular regions (parieto-insular vestibular cortex — PIVC/posterior insular cortex — PIC; ventral intraparietal region — VIP) when stationary observers were exposed to vection-inducing optic flow (i.e., globally coherent oscillating and smooth self-motion displays) as well as two suitable control displays. In line with earlier studies in which no vection occurred, CSv and PIVC/PIC both showed significantly increased BOLD activity during oscillating global motion compared to the other motion conditions (although this effect was found for fewer subjects in PIVC/PIC). The increase in BOLD activity in PIVC/PIC during prolonged exposure to the oscillating (compared to smooth) patterns of global optical flow appears consistent with vestibular facilitation.

