Elucidating Factors that Can Facilitate Veridical Spatial Perception in Immersive Virtual Environments

2008 ◽  
Vol 17 (2) ◽  
pp. 176-198 ◽  
Author(s):  
Victoria Interrante ◽  
Brian Ries ◽  
Jason Lindquist ◽  
Michael Kaeding ◽  
Lee Anderson

Ensuring veridical spatial perception in immersive virtual environments (IVEs) is an important yet elusive goal. In this paper, we present the results of two experiments that seek further insight into this problem. In the first of these experiments, initially reported in Interrante, Ries, Lindquist, and Anderson (2007), we seek to disambiguate two alternative hypotheses that could explain our recent finding (Interrante, Anderson, and Ries, 2006a) that participants appear not to significantly underestimate egocentric distances in HMD-based IVEs, relative to in the real world, in the special case that they unambiguously know, through first-hand observation, that the presented virtual environment is a high-fidelity 3D model of their concurrently occupied real environment. Specifically, we seek to determine whether people are able to make similarly veridical judgments of egocentric distances in these matched real and virtual environments because (1) they are able to use metric information gleaned from their exposure to the real environment to calibrate their judgments of sizes and distances in the matched virtual environment, or because (2) their prior exposure to the real environment enabled them to achieve a heightened sense of presence in the matched virtual environment, which leads them to act on the visual stimulus provided through the HMD as if they were interpreting it as a computer-mediated view of an actual real environment, rather than just as a computer-generated picture, with all of the uncertainties that that would imply. In our second experiment, we seek to investigate the extent to which augmenting a virtual environment model with faithfully-modeled replicas of familiar objects might enhance people's ability to make accurate judgments of egocentric distances in that environment.

2021 ◽  
Vol 2 ◽  
Author(s):  
Lauren Buck ◽  
Richard Paris ◽  
Bobby Bodenheimer

Spatial perception in immersive virtual environments, particularly distance perception, is a well-studied topic in the virtual reality literature. Distance compression, or the underestimation of distances, has historically been prevalent across virtual reality systems. The problem of distance compression remains open, but recent work has shown that as systems have developed, the level of distance compression has decreased. Here, we add evidence to this trend by beginning the assessment of distance compression in the HTC Vive Pro. To our knowledge, there are no archival results reporting distance compression in this system. Using a familiar paradigm for studying distance compression in virtual reality hardware, we asked users to blind walk to a target object placed in a virtual environment and assessed their judgments of those distances. We find that distance compression in the HTC Vive Pro mirrors that of the HTC Vive. Our results are not particularly surprising, considering the nature of the differences between the two systems, but they lend credence to the finding that resolution does not affect distance compression. More extensive study should be performed to reinforce these results.


2006 ◽  
Vol 18 (4) ◽  
pp. 467-475 ◽  
Author(s):  
Marcia K. O’Malley ◽  
Gina Upperman

The performance levels of human subjects in size identification and size discrimination experiments in both real and virtual environments are presented. The virtual environments are displayed with a PHANToM desktop three-degree-of-freedom haptic interface. Results indicate that performance of the size identification and size discrimination tasks in the virtual environment is comparable to that in the real environment, implying that the haptic device does a good job of simulating reality for these tasks. Additionally, performance in the virtual environment was measured at below-maximum machine performance levels for two machine parameters. The tabulated scores for the perception tasks in a sub-optimal virtual environment were found to be comparable to those in the real environment, supporting previous claims that haptic interface hardware may be able to convey, for these perceptual tasks, sufficient perceptual information to the user with relatively low levels of machine quality in terms of the following parameters: maximum endpoint force and maximum virtual surface stiffness. Results are comparable to those found for similar experiments conducted with other haptic interface hardware, further supporting this claim. Finally, it was found that varying maximum output force and virtual surface stiffness simultaneously does not have a compounding effect that significantly affects performance in size discrimination tasks.


Author(s):  
A.I. Zagranichny

The article presents the results of a study of how different types of activity depend on the frequency with which social activity is transferred from the real environment to the virtual environment and vice versa. The study identified the following types of activity: play, educational activity, work, and communicative activity. 214 respondents aged 15 to 24 from the cities of Balakovo, Saratov, and Moscow participated; 52% were women, and their social statuses were "pupil", "student", and "young specialist". The correlations between these types of activity and the frequency of transfer of social activity from one environment to the other were analyzed and interpreted. The results were as follows: the frequency of transfer of social activity from the real environment to the virtual environment correlates positively with play activity (r = 0.221; p < 0.01), educational activity (r = 0.228; p < 0.01), and communicative activity (r = 0.346; p < 0.01). The frequency of transfer of social activity from the virtual environment to the real one correlates positively with only two types of activity: educational activity (r = 0.188; p < 0.05) and communicative activity (r = 0.331; p < 0.01).
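The reported r values are Pearson correlation coefficients. As an illustration of how such a coefficient is computed, here is a sketch over invented paired scores (not the study's data):

```python
import math

# Hypothetical paired scores for one pair of variables, e.g. transfer
# frequency (x) vs. communicative-activity score (y).
x = [1, 2, 2, 3, 4, 5, 5, 6]
y = [2, 1, 3, 3, 4, 4, 6, 5]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
# Pearson r = covariance / (product of standard deviations), here in
# unnormalized sum-of-products form (the n factors cancel).
cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
r = cov / math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
print(f"r = {r:.3f}")  # → r = 0.854
```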


Author(s):  
Christophe Duret

This chapter will propose an ontology of virtual environments that calls into question the dichotomy between the real and the virtual. It will draw on the concepts of trajectivity and 'médiance' in order to describe the way virtual environments, with their technological and symbolic features, take part in the construction of human environments. This theoretical proposition will be illustrated with an analysis of Arcadia, a virtual environment built in Second Life. Finally, mesocriticism will be proposed as a new approach for the study of virtual environments.


2010 ◽  
pp. 180-193 ◽  
Author(s):  
F. Steinicke ◽  
G. Bruder ◽  
J. Jerald ◽  
H. Frenz

In recent years, virtual environments (VEs) have become increasingly popular and widespread due to the requirements of numerous application areas, particularly the 3D city visualization domain. Virtual reality (VR) systems, which make use of tracking technologies and stereoscopic projections of three-dimensional synthetic worlds, support better exploration of complex datasets. However, due to the limited interaction space usually provided by the range of the tracking sensors, users can explore only a portion of the virtual environment (VE). Redirected walking allows users to walk through large-scale immersive virtual environments (IVEs) such as virtual city models, while physically remaining in a reasonably small workspace, by intentionally injecting scene motion into the IVE. With redirected walking, users are guided on physical paths that may differ from the paths they perceive in the virtual world. The authors have conducted experiments in order to quantify how much humans can unknowingly be redirected. In this chapter they present the results of this study and the implications for virtual locomotion user interfaces that allow users to view arbitrary real-world locations before the users actually travel there in a natural environment.
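One common form of the scene-motion injection described above is a curvature gain, which bends the user's physical path onto an arc while the virtual path stays straight. The sketch below is an assumption-based illustration of that idea, not the authors' implementation; the radius and step length are made-up values near commonly cited detection thresholds:

```python
import math

# Curvature-gain sketch: with curvature radius R, each physically measured
# straight step of length `step` is accompanied by an injected scene rotation
# of step / R radians, which the user unknowingly compensates for by walking
# along a circular arc in the real workspace.
R = 22.0      # assumed curvature radius in metres
step = 0.7    # assumed physical step length in metres

virtual_heading = 0.0
for _ in range(10):                 # ten straight steps
    virtual_heading += step / R     # injected rotation per step (radians)

print(f"injected rotation after 10 steps: {math.degrees(virtual_heading):.1f} deg")
# → injected rotation after 10 steps: 18.2 deg
```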


2019 ◽  
Vol 9 (9) ◽  
pp. 1797
Author(s):  
Chen ◽  
Lin

Augmented reality (AR) is an emerging technology that allows users to interact with simulated environments, including those emulating scenes in the real world. Most current AR technologies involve the placement of virtual objects within these scenes. However, difficulties in modeling real-world objects greatly limit the scope of the simulation, and thus the depth of the user experience. In this study, we developed a process by which to realize virtual environments that are based entirely on scenes in the real world. In modeling the real world, the proposed scheme divides scenes into discrete objects, which are then replaced with virtual objects. This enables users to interact in and with virtual environments without limitations. An RGB-D camera is used in conjunction with simultaneous localization and mapping (SLAM) to obtain the movement trajectory of the user and derive information related to the real environment. In modeling the environment, graph-based segmentation is used to segment point clouds and perform object segmentation to enable the subsequent replacement of objects with equivalent virtual entities. Superquadrics are used to derive shape parameters and location information from the segmentation results in order to ensure that the scale of the virtual objects matches that of the original objects in the real world. Only after the objects have been replaced with their virtual counterparts is the real environment converted into a virtual scene. Experiments involving the emulation of real-world locations demonstrated the feasibility of the proposed rendering scheme. A rock-climbing application scenario is finally presented to illustrate the potential use of the proposed system in AR applications.
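The superquadric fitting mentioned above typically relies on the standard superquadric inside-outside function, which scores whether a point lies inside (F < 1), on (F = 1), or outside (F > 1) the surface given scale parameters a1, a2, a3 and shape exponents e1, e2. The sketch below shows this textbook function; the parameter names are ours, not necessarily the paper's, and a full fit would minimize F over a segmented point cloud:

```python
import numpy as np

# Superquadric inside-outside function F(x, y, z).
# a1, a2, a3: per-axis scales; e1, e2: shape exponents
# (e1 = e2 = 1 gives an ellipsoid; small exponents approach a box).
def superquadric_F(points, a1, a2, a3, e1, e2):
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    xy = (np.abs(x / a1) ** (2 / e2) + np.abs(y / a2) ** (2 / e2)) ** (e2 / e1)
    return xy + np.abs(z / a3) ** (2 / e1)

# Sphere case (all scales 1, e1 = e2 = 1): F reduces to squared radius.
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
print(superquadric_F(pts, 1.0, 1.0, 1.0, 1.0, 1.0))  # → [0. 1. 4.]
```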


1996 ◽  
Vol 5 (1) ◽  
pp. 122-135 ◽  
Author(s):  
Takashi Oishi ◽  
Susumu Tachi

See-through head-mounted displays (STHMDs), which superimpose a virtual environment generated by computer graphics (CG) on the real world, are expected to vividly display various simulations and designs by using both the real environment and the virtual environment around us. However, we must ensure that the virtual environment is superimposed exactly on the real environment, because both environments are visible. Mismatches in the locations and sizes of real and virtual objects are likely to occur between the world coordinates of the real environment where the STHMD user actually exists and those of the virtual environment described by the CG parameters. This disagreement directly displaces the locations where virtual objects are superimposed, so the STHMD must be calibrated to superimpose the virtual environment properly. Among the causes of such errors, we focus on both systematic errors in the projection transformation parameters introduced during manufacturing and differences between the actual and assumed locations of the user's eye relative to the STHMD during use, and we propose a calibration method to eliminate these effects. In this method, a virtual cursor drawn in the virtual environment is fitted directly onto targets in the real environment. Based on the result of the fitting, the least-squares method identifies the parameter values that minimize the differences between the locations of the virtual cursor in the virtual environment and the targets in the real environment. After describing the calibration method, we report the results of applying it to an STHMD that we have built. The results are accurate enough to demonstrate the effectiveness of the calibration method.
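The least-squares step described above can be illustrated with a deliberately simplified model: instead of the paper's full projection-parameter set, the sketch below fits a 2D affine correction (scale matrix A and offset b, our notation) that minimizes the squared distance between cursor positions and their fitted targets:

```python
import numpy as np

# Cursor positions recorded during fitting (hypothetical data).
cursor = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
# Target positions generated from a known distortion: scale 1.1,
# offset (0.05, -0.02), so the fit should recover exactly these values.
target = 1.1 * cursor + np.array([0.05, -0.02])

# Solve min || X @ P - target ||^2 with design matrix X = [x, y, 1];
# rows of P hold the affine matrix A (transposed) and the offset b.
X = np.hstack([cursor, np.ones((len(cursor), 1))])
params, *_ = np.linalg.lstsq(X, target, rcond=None)

print(params.round(3))
```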


Robotica ◽  
2009 ◽  
Vol 28 (1) ◽  
pp. 47-56 ◽  
Author(s):  
M. Karkoub ◽  
M.-G. Her ◽  
J.-M. Chen

SUMMARY
In this paper, an interactive virtual reality motion simulator is designed and analyzed. The main components of the system include a bilateral control interface, networking, a virtual environment, and a motion simulator. The virtual reality entertainment system uses a virtual environment that enables the operator to feel actual feedback through a haptic interface, as well as the distorted motion from the virtual environment, just as s/he would in the real environment. The control scheme for the simulator uses the changes in velocity and acceleration that the operator imposes on the joystick, the environmental changes imposed on the motion simulator, and the haptic feedback to the operator to maneuver the simulator in the real environment. The stability of the closed-loop system is analyzed based on the Nyquist stability criterion. It is shown that the proposed design for the simulator system works well, and the theoretical findings are validated experimentally.

