Rotational Self-motion Cues Improve Spatial Learning when Teleporting in Virtual Environments

Author(s):  
Alex F. Lim ◽  
Jonathan W. Kelly ◽  
Nathan C. Sepich ◽  
Lucia A. Cherep ◽  
Grace C. Freed ◽  
...  
2008 ◽  
Vol 17 (1) ◽  
pp. 43-56 ◽  
Author(s):  
Aleksander Väljamäe ◽  
Pontus Larsson ◽  
Daniel Västfjäll ◽  
Mendel Kleiner

Sound is an important, but often neglected, component for creating a self-motion illusion (vection) in Virtual Reality applications such as motion simulators. Apart from auditory motion cues, sound can provide contextual information representing self-motion in a virtual environment. In two experiments we investigated the benefits of hearing an engine sound when presenting auditory (Experiment 1) or auditory-vibrotactile (Experiment 2) virtual environments inducing linear vection. The addition of the engine sound to the auditory scene significantly enhanced subjective ratings of vection intensity in Experiment 1, and enhanced vection onset times, but not subjective ratings, in Experiment 2. Further analysis using individual imagery vividness scores showed that this disparity between vection measures was created by participants with higher kinesthetic imagery. For participants with lower kinesthetic imagery scores, on the other hand, the engine sound enhanced vection sensation in both experiments. A high correlation with participants' kinesthetic imagery vividness scores suggests the influence of a first-person perspective in the perception of the engine sound. We hypothesize that self-motion sounds (e.g., the sound of footsteps or an engine) represent a specific type of acoustic body-centered feedback in virtual environments. The results may therefore contribute to a better understanding of the role of self-representation sounds (sonic self-avatars) in virtual and augmented environments.


2019 ◽  
Author(s):  
Lucia Cherep ◽  
Alex Lim ◽  
Jonathan Kelly ◽  
Alec Ostrander ◽  
Stephen B. Gilbert

Teleporting is a popular interface that allows virtual reality users to explore environments larger than the available walking space. When teleporting, the user positions a marker in the virtual environment and is instantly transported there without any self-motion cues. Five experiments were designed to evaluate the spatial cognitive consequences of teleporting and to identify environmental cues that could mitigate those costs. Participants performed a triangle completion task by traversing two outbound path legs before pointing to the unmarked path origin. Locomotion was accomplished via walking or one of two common implementations of the teleporting interface, distinguished by the concordance between movement of the body and movement through the virtual environment. In the partially concordant teleporting interface, participants teleported to translate (change position) but turned the body to rotate. In the discordant teleporting interface, participants teleported both to translate and to rotate. Across all five experiments, discordant teleporting produced larger errors than partially concordant teleporting, which in turn produced larger errors than walking, reflecting the importance of translational and rotational self-motion cues. Furthermore, geometric boundaries (room walls or a fence) were necessary to mitigate the spatial cognitive costs associated with teleporting, and landmarks were helpful only in the context of a geometric boundary.
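The pointing error in the triangle completion task above can be computed with simple plane geometry. A minimal sketch (the function name and the example leg lengths and angles are illustrative, not the study's actual parameters):

```python
import math

def triangle_completion_error(leg1, turn_deg, leg2, pointed_deg):
    """Signed pointing error (deg) for a two-leg outbound path.

    The walker starts at the origin heading along +y, walks leg1,
    turns clockwise by turn_deg, walks leg2, then points toward the
    (unmarked) origin. pointed_deg is the reported direction in world
    coordinates, measured counterclockwise from the +x axis.
    """
    # Endpoint of leg 1: straight ahead along +y.
    x1, y1 = 0.0, leg1
    # Heading after the turn, converted to world coordinates.
    heading = math.radians(90.0 - turn_deg)
    x2 = x1 + leg2 * math.cos(heading)
    y2 = y1 + leg2 * math.sin(heading)
    # Correct direction from the endpoint back to the origin.
    correct = math.degrees(math.atan2(-y2, -x2))
    # Signed error, wrapped into (-180, 180].
    return (pointed_deg - correct + 180.0) % 360.0 - 180.0

# Walk 2 m, turn 90° clockwise, walk 2 m: the origin then lies at
# -135° in world coordinates, so pointing there gives zero error.
print(triangle_completion_error(2.0, 90.0, 2.0, -135.0))
```

Signed errors of this kind (or their absolute values) are what the walking and teleporting conditions would be compared on.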


2022 ◽  
pp. 1-29
Author(s):  
Andrew R. Wagner ◽  
Megan J. Kobel ◽  
Daniel M. Merfeld

In an effort to characterize the factors influencing the perception of rotational self-motion cues, vestibular self-motion perceptual thresholds were measured in 14 subjects for rotations in the roll and pitch planes, as well as in the planes aligned with the anatomical orientation of the vertical semicircular canals (i.e., left anterior, right posterior: LARP; right anterior, left posterior: RALP). To determine the multisensory influence of concurrent otolith cues, thresholds within each plane of motion were measured at four discrete frequencies for rotations about earth-horizontal axes (i.e., tilts; EH) and earth-vertical axes (i.e., head positioned in the plane of the rotation; EV). We found that the perception of rotations stimulating primarily the vertical canals was consistent with the behavior of a high-pass filter for all planes of motion, with velocity thresholds increasing at lower frequencies of rotation. In contrast, tilt (i.e., EH rotation) velocity thresholds, which stimulate both the canals and the otoliths (i.e., multisensory integration), decreased at lower frequencies and were significantly lower than earth-vertical rotation thresholds at each frequency below 2 Hz. These data suggest that multisensory integration of otolithic gravity cues with semicircular canal rotation cues enhances perceptual precision for tilt motions at frequencies below 2 Hz. We also showed that rotation thresholds were at least partially dependent on the orientation of the rotation plane relative to the anatomical alignment of the vertical canals. Collectively, these data provide the first comprehensive report of how frequency and axis of rotation influence the perception of rotational self-motion cues stimulating the vertical canals.
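The high-pass-filter account above can be sketched quantitatively: if the canal signal is attenuated by a first-order high-pass filter, velocity thresholds should scale with the reciprocal of the filter gain and therefore rise at low frequencies. A toy model with hypothetical corner-frequency and asymptotic-threshold values (not the paper's fitted parameters):

```python
import math

def hp_velocity_threshold(f_hz, t_inf=1.0, fc_hz=0.3):
    """Velocity threshold (deg/s) under a first-order high-pass model.

    t_inf : asymptotic threshold at high frequency (hypothetical value)
    fc_hz : corner frequency of the filter (hypothetical value)

    A filter H(f) = jf / (jf + fc) attenuates low-frequency velocity
    by |H(f)|, so the just-detectable stimulus rises by 1/|H(f)|.
    """
    ratio = f_hz / fc_hz
    gain = ratio / math.sqrt(1.0 + ratio ** 2)
    return t_inf / gain

# Thresholds rise steeply below the corner frequency and flatten above it.
for f in (0.1, 0.3, 1.0, 2.0):
    print(f, round(hp_velocity_threshold(f), 3))
```

This reproduces only the qualitative canal-mediated (EV) pattern; the tilt (EH) thresholds, which fall at low frequency, would require an added otolith term.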


2001 ◽  
Vol 86 (2) ◽  
pp. 692-702 ◽  
Author(s):  
Michaël B. Zugaro ◽  
Eiichi Tabuchi ◽  
Céline Fouquier ◽  
Alain Berthoz ◽  
Sidney I. Wiener

Head direction (HD) cells discharge selectively in macaques, rats, and mice when they orient their head in a specific ("preferred") direction. Preferred directions are influenced by visual cues as well as by idiothetic self-motion cues derived from vestibular, proprioceptive, motor efferent copy, and command signals. To distinguish the relative importance of active locomotor signals, we compared the response properties of 49 anterodorsal (AD) thalamic HD cells in six male Long-Evans rats during active displacements in a foraging task and during passive rotations. Since thalamic HD cells typically stop firing if the animals are tightly restrained, the rats were trained to remain immobile while drinking water distributed at intervals from a small reservoir at the center of a rotatable platform. The platform was rotated in a clockwise/counterclockwise oscillation to record directional responses in the stationary animals while the surrounding environmental cues remained stable. The peak rate of directional firing decreased by 27% on average during passive rotations (r² = 0.73, P < 0.001). Individual cells recorded in sequential sessions (n = 8) reliably showed comparable reductions in peak firing, but simultaneously recorded cells did not necessarily produce identical responses. All of the HD cells maintained the same preferred directions during passive rotations. These results are consistent with the hypothesis that the level of locomotor activity provides a state-dependent modulation of the response magnitude of AD HD cells. This could result from diffusely projecting neuromodulatory systems associated with motor state.


Author(s):  
Lawrence Hettinger ◽  
Tarah Schmidt-Daly ◽  
David Jones ◽  
Behrang Keshavarz

Perception ◽  
1998 ◽  
Vol 27 (8) ◽  
pp. 937-949 ◽  
Author(s):  
Takanao Yajima ◽  
Hiroyasu Ujike ◽  
Keiji Uchikawa

The two main questions addressed in this study were (a) what effect does yoking the relative expansion and contraction (EC) of retinal images to forward and backward head movements have on the resultant magnitude and stability of perceived depth, and (b) how does this relative EC image motion interact with the depth cue of motion parallax? Relative EC image motion was produced by moving a small CCD camera toward and away from the stimulus, two random-dot surfaces separated in depth, in synchrony with the observers' forward and backward head movements. Observers viewed the stimuli monocularly, on a helmet-mounted display, while moving their heads at various velocities, including zero velocity. The results showed that (a) the magnitude of perceived depth was smaller with smaller head velocities (<10 cm s⁻¹), including the zero-head-velocity condition, than with a larger velocity (10 cm s⁻¹), and (b) when the motion parallax and EC image motion cues were presented simultaneously, perceived depth was equal to the greater of the two depths produced by either cue alone. The results suggest a role for nonvisual information about self-motion in perceiving depth.
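The combination rule reported in (b) is winner-take-all rather than averaging; a toy contrast between the two predictions (the depth values are hypothetical):

```python
def winner_take_all_depth(parallax_depth, ec_depth):
    """Prediction consistent with the result above: perceived depth with
    both cues equals the larger of the two single-cue depths."""
    return max(parallax_depth, ec_depth)

def averaged_depth(parallax_depth, ec_depth):
    """Linear cue averaging, shown for contrast: it predicts a value
    between the single-cue depths, which the data did not show."""
    return 0.5 * (parallax_depth + ec_depth)

print(winner_take_all_depth(4.0, 2.5))  # → 4.0
print(averaged_depth(4.0, 2.5))         # → 3.25
```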


2021 ◽  
Vol 79 (1) ◽  
pp. 95-116
Author(s):  
Cosimo Tuena ◽  
Valentina Mancuso ◽  
Chiara Stramba-Badiale ◽  
Elisa Pedroli ◽  
Marco Stramba-Badiale ◽  
...  

Background: Spatial navigation is the ability to estimate one's position on the basis of environmental and self-motion cues. Spatial memory is the cognitive substrate underlying navigation and relies on two different reference frames: egocentric and allocentric. These spatial frames are prone to decline with aging, and impairment is even more pronounced in Alzheimer's disease (AD) and in mild cognitive impairment (MCI). Objective: To conduct a systematic review of experimental studies investigating which MCI populations and tasks are used to evaluate spatial memory, and how allocentric and egocentric spatial memory are impaired in MCI during navigation. Methods: PRISMA and PICO guidelines were applied to carry out the systematic search. The Downs and Black checklist was used to assess methodological quality. Results: Our results showed that amnestic MCI and AD pathology are the most investigated typologies; both egocentric and allocentric memory are impaired in MCI individuals, and MCI due to AD biomarkers entails specific encoding and retrieval impairments. Secondly, spatial navigation is principally investigated with the hidden goal task (in virtual and real-world versions), and among studies involving virtual reality, the privileged setting is non-immersive technology. Thirdly, despite subtle differences, real-world and virtual versions show good overlap for the assessment of MCI spatial memory. Conclusion: Considering that MCI is a subclinical entity with potential risk of conversion to dementia, investigating spatial memory deficits with navigation tasks might be crucial for accurate diagnosis and rehabilitation.


1992 ◽  
Vol 1 (3) ◽  
pp. 306-310 ◽  
Author(s):  
Lawrence J. Hettinger ◽  
Gary E. Riccio

Visually induced motion sickness is a syndrome that occasionally occurs when physically stationary individuals view compelling visual representations of self-motion. It may also occur when detectable lags are present between head movements and recomputation and presentation of the visual display in helmet-mounted displays. The occurrence of this malady is a critical issue for the future development and implementation of virtual environments. Applications of this emerging technology are likely to be compromised to the extent that users experience illness and/or incapacitation. This article presents an overview of what is currently known regarding the relationship between visually specified self-motion in the absence of inertial displacement and resulting illness and perceptual-motor disturbances.

