Low-Asymmetry Interface for Multiuser VR Experiences with Both HMD and Non-HMD Users

Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 397
Author(s):  
Qimeng Zhang ◽  
Ji-Su Ban ◽  
Mingyu Kim ◽  
Hae Won Byun ◽  
Chang-Hun Kim

We propose a low-asymmetry interface to improve the presence of non-head-mounted-display (non-HMD) users in shared virtual reality (VR) experiences with HMD users. The low-asymmetry interface ensures that the HMD and non-HMD users’ perception of the VR environment is nearly identical; that is, the point-of-view (PoV) asymmetry and behavior asymmetry between HMD and non-HMD users are reduced. Our system comprises a portable mobile device as a visual display to provide a changing PoV for the non-HMD user, and a walking simulator as an in-place walking detection sensor to enable the same level of realistic and unrestricted physical-walking-based locomotion for all users. Because this allows non-HMD users to experience the same level of visualization and free movement as HMD users, both can engage as main actors in movement scenarios. Our user study revealed that the low-asymmetry interface enables non-HMD users to feel a presence similar to that of HMD users when performing equivalent locomotion tasks in a virtual environment. Furthermore, our system enables one HMD user and multiple non-HMD users to participate together in a virtual world; our experiments show that non-HMD user satisfaction increases with the number of non-HMD participants, owing to increased presence and enjoyment.
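The in-place walking detection that drives locomotion here can be sketched as threshold crossing on vertical acceleration with a short refractory window. This is an illustrative assumption of how such a sensor might work, not the authors' published algorithm; the threshold and window values are made up for the example.

```python
def count_steps(accel_z, threshold=1.5, refractory=3):
    """Count in-place steps as upward threshold crossings in a stream of
    vertical-acceleration samples, ignoring samples right after a step."""
    steps = 0
    cooldown = 0
    for a in accel_z:
        if cooldown > 0:
            cooldown -= 1          # still inside the refractory window
        elif a > threshold:
            steps += 1             # a new step begins
            cooldown = refractory  # suppress the rest of this peak
    return steps
```

A driver would feed this a sliding window of sensor samples and translate the step count into forward motion in the virtual world.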

2021 ◽  
Vol 5 (ISS) ◽  
pp. 1-17
Author(s):  
Finn Welsford-Ackroyd ◽  
Andrew Chalmers ◽  
Rafael Kuffner dos Anjos ◽  
Daniel Medeiros ◽  
Hyejin Kim ◽  
...  

In this paper, we present a system that allows a user with a head-mounted display (HMD) to communicate and collaborate with spectators outside of the headset. We evaluate its impact on task performance, immersion, and collaborative interaction. Our solution targets scenarios like live presentations or multi-user collaborative systems, where it is not convenient to develop a VR multiplayer experience and supply each user (and spectator) with an HMD. The spectator views the virtual world on a large-scale tiled video wall and is given the ability to control the orientation of their own virtual camera. This allows spectators to stay focused on the immersed user's point of view or freely look around the environment. To improve collaboration between users, we implemented a pointing system where a spectator can point at objects on the screen, which maps an indicator directly onto the objects in the virtual world. We conducted a user study to investigate the influence of rotational camera decoupling and pointing gestures in the context of HMD-immersed and non-immersed users utilizing a large-scale display. Our results indicate that camera decoupling and pointing positively impact collaboration. A decoupled view is preferable in situations where both users need to indicate objects of interest in the scene, such as presentations and joint-task scenarios, as these require a shared reference space. A coupled view, on the other hand, is preferable in synchronous interactions such as remote-assistant scenarios.
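The core of the pointing system, mapping a spectator's 2D screen point onto an object in the virtual world, amounts to unprojecting the point into a view ray and picking the nearest intersected object. The sketch below assumes sphere proxies and a unit-length ray direction; the function names and scene representation are ours, not the paper's API.

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Return distance t along a unit-length ray to a sphere, or None."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    d = direction
    b = 2 * (ox * d[0] + oy * d[1] + oz * d[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * c            # quadratic a == 1 for a unit direction
    if disc < 0:
        return None                 # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2  # nearer of the two intersections
    return t if t >= 0 else None    # reject hits behind the origin

def pick_object(origin, direction, spheres):
    """Choose the nearest object (name, center, radius) hit by the ray."""
    best = None
    for name, center, radius in spheres:
        t = ray_hits_sphere(origin, direction, center, radius)
        if t is not None and (best is None or t < best[1]):
            best = (name, t)
    return best[0] if best else None
```

The picked object's world position is then where the in-world indicator would be drawn for the immersed user.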


1995 ◽  
Vol 4 (1) ◽  
pp. 1-23 ◽  
Author(s):  
Warren Robinett ◽  
Richard Holloway

The visual display transformation for virtual reality (VR) systems is typically much more complex than the standard viewing transformation discussed in the literature for conventional computer graphics. The process can be represented as a series of transformations, some of which contain parameters that must match the physical configuration of the system hardware and the user's body. Because of the number and complexity of the transformations, a systematic approach and a thorough understanding of the mathematical models involved are essential. This paper presents a complete model for the visual display transformation for a VR system; that is, the series of transformations used to map points from object coordinates to screen coordinates. Virtual objects are typically defined in an object-centered coordinate system (CS), but must be displayed using the screen-centered CSs of the two screens of a head-mounted display (HMD). This particular algorithm for the VR display computation allows multiple users to independently change position, orientation, and scale within the virtual world, allows users to pick up and move virtual objects, uses the measurements from a head tracker to immerse the user in the virtual world, provides an adjustable eye separation for generating two stereoscopic images, uses the off-center perspective projection required by many HMDs, and compensates for the optical distortion introduced by the lenses in an HMD. The implementation of this framework as the core of the UNC VR software is described, and the values of the UNC display parameters are given. We also introduce the vector-quaternion-scalar (VQS) representation for transformations between 3D coordinate systems, which is specifically tailored to the needs of a VR system. The transformations and CSs presented comprise a complete framework for generating the computer-graphic imagery required in a typical VR system. 
The model presented here is deliberately abstract in order to be general purpose; thus, issues of system design and visual perception are not addressed. While the mathematical techniques involved are already well known, there are enough parameters and pitfalls that a detailed description of the entire process should be a useful tool for someone interested in implementing a VR system.
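The VQS representation the paper introduces packs a transformation between 3D coordinate systems into a translation vector v, a unit rotation quaternion q, and a uniform scale s. A minimal illustration of applying one VQS transform to a point (scale, then rotate, then translate) follows; the function names are ours, and this omits the composition and inversion operations a full implementation would need.

```python
import math

def quat_rotate(q, p):
    """Rotate point p by unit quaternion q = (w, x, y, z),
    using p' = p + w*t + qv x t with t = 2*(qv x p)."""
    w, x, y, z = q
    tx = 2 * (y * p[2] - z * p[1])
    ty = 2 * (z * p[0] - x * p[2])
    tz = 2 * (x * p[1] - y * p[0])
    return (p[0] + w * tx + (y * tz - z * ty),
            p[1] + w * ty + (z * tx - x * tz),
            p[2] + w * tz + (x * ty - y * tx))

def vqs_apply(v, q, s, p):
    """Map point p across coordinate systems: uniform scale by s,
    rotate by q, translate by v."""
    scaled = (s * p[0], s * p[1], s * p[2])
    rx, ry, rz = quat_rotate(q, scaled)
    return (rx + v[0], ry + v[1], rz + v[2])
```

Chaining such transforms (object to world, world to head, head to eye) reproduces the kind of display pipeline the paper describes, with the uniform scale supporting users growing or shrinking within the virtual world.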


2019 ◽  
Vol 3 (Supplement_1) ◽  
pp. S306-S306
Author(s):  
Steven J Baker ◽  
Jenny Waycott ◽  
Jeni Warburton ◽  
Frances Batchelor

Abstract A large body of research demonstrates the positive impact that reminiscence activities can have on older adult wellbeing. Within this space, researchers have begun to explore how virtual reality (VR) technology might be used as a reminiscence tool. The immersive characteristics of VR could aid reminiscence by giving the sense of being fully present in a virtual environment that evokes the time being explored in the reminiscence session. However, to date, research into the use of VR as a reminiscence tool has overwhelmingly focussed on static environments that can only be viewed by a single user. This paper reports on a first-of-its-kind research project that used social VR (multiple users co-present in a single virtual environment) and 3D representations of personal artifacts (such as photographs and recorded anecdotes) to allow a group of older adults to reminisce about their school experiences. Sixteen older adults aged 70-81 participated in a four-month user study, meeting in groups with a facilitator in a social virtual world called the Highway of Life. Results demonstrate how the social experience, tailored environment, and personal artifacts that were features of the social VR environment allowed the older adults to collaboratively reminisce about their school days. We conclude by considering the benefits and challenges associated with using social VR as a reminiscence tool with older adults.


Author(s):  
Robin Horst ◽  
Ramtin Naraghi-Taghi-Off ◽  
Linda Rau ◽  
Ralf Dörner

Abstract Every Virtual Reality (VR) experience has to end at some point. While concepts already exist for designing transitions that take users into a virtual world, their return to the physical world should be considered as well, as it is part of the overall VR experience. We call the latter outro-transitions. In contrast to offboarding of VR experiences, which takes place after taking off VR hardware (e.g., HMDs), outro-transitions are still part of the immersive experience. Such transitions occur more frequently when VR is experienced periodically and for only short times. One example where transition techniques are necessary is an auditorium where the audience has individual VR headsets available, for example, in a presentation using PowerPoint slides together with brief VR experiences sprinkled between the slides. The audience must put on and take off HMDs frequently every time they switch from common presentation media to VR and back. In such a one-to-many VR scenario, it is challenging for presenters to manage the process of multiple people coming back from the virtual to the physical world at once. Direct communication may be constrained while VR users are wearing an HMD. Presenters need a tool to signal VR users to stop the VR session and switch back to the slide presentation. Virtual visual cues can help presenters or other external entities (e.g., automated/scripted events) request that VR users end a VR session. Such transitions become part of the overall experience of the audience and thus must be considered. This paper explores visual cues as outro-transitions from a virtual world back to the physical world and their utility in enabling presenters to request that VR users end a VR session. We propose and investigate eight transition techniques. We focus on their usage in short consecutive VR experiences and include both established and novel techniques.
The transition techniques are evaluated in a user study to draw conclusions on the effects of outro-transitions on the overall experience and presence of participants. We also take into account how long an outro-transition may take and how comfortable our participants found the proposed techniques. The study indicates that participants preferred non-interactive outro-transitions over interactive ones, except for a transition that allowed VR users to communicate with presenters. Furthermore, we explore the presenter-VR user relation within a presentation scenario that uses short VR experiences. The study indicates that involving presenters who can stop a VR session was not only acceptable but preferred by our participants.
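One non-interactive outro-transition of the kind studied here can be driven by a simple timed cue: once the presenter requests the end of the session, a visual cue (for example, a fade overlay) ramps up over a fixed duration. The sketch below is our own illustration of that timing logic, not one of the paper's eight techniques.

```python
def fade_alpha(t, duration=2.0):
    """Opacity of a fade-out cue t seconds after the presenter's end
    request, ramping linearly and clamped to [0, 1]."""
    return max(0.0, min(1.0, t / duration))
```

A renderer would sample this every frame and remove the HMD prompt once the alpha reaches 1.0, at which point the user takes off the headset and returns to the slides.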


Author(s):  
Stefan Bittmann

Virtual reality (VR) is the term used to describe representation and perception in a computer-generated, virtual environment. The term was coined by author Damien Broderick in his 1982 novel "The Judas Mandala". The term "mixed reality" describes the mixing of virtual reality with pure reality; the term "hyper-reality" is also used. Immersion plays a major role here: it describes the embedding of the user in the virtual world. A virtual world is considered plausible if the interaction within it is logically consistent, and this interactivity creates the illusion that what seems to be happening is actually happening. A common problem with VR is motion sickness. To create a sense of immersion, special output devices are needed to display virtual worlds; head-mounted displays, CAVE systems, and shutter glasses are mainly used. Interaction requires input devices such as the 3D mouse, data glove, and flystick, as well as the omnidirectional treadmill, which maps real walking movements to walking in virtual space.


2020 ◽  
Vol 10 (2) ◽  
pp. 486 ◽  
Author(s):  
Andrzej Burghardt ◽  
Dariusz Szybicki ◽  
Piotr Gierlak ◽  
Krzysztof Kurc ◽  
Paulina Pietruś ◽  
...  

The article presents a method of programming robots using virtual reality and digital twins. The virtual environment is a digital twin of a robotic station, built from CAD models of the existing station elements. The virtual reality system is used to record human movements in the virtual environment, which are then reproduced by a real robot. The method is dedicated mainly to situations in which the robot must reproduce the movements of a human performing a process that is complicated from the point of view of robotization. An example of using the method to program a robot for cleaning ceramic casting moulds is presented.
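The record-and-replay idea can be sketched as turning a dense stream of hand poses captured in the digital twin into a sparser waypoint program for the real robot. The downsampling rule and distance threshold below are illustrative assumptions; the authors' actual pipeline is not described at this level in the abstract.

```python
import math

def poses_to_waypoints(poses, min_gap=0.05):
    """Downsample recorded hand positions (x, y, z in metres), keeping a
    waypoint only when the hand has moved at least min_gap since the
    last kept waypoint."""
    waypoints = []
    for p in poses:
        if not waypoints or math.dist(p, waypoints[-1]) >= min_gap:
            waypoints.append(p)
    return waypoints
```

The resulting waypoint list would then be converted to robot motion commands, with the digital twin used to verify reachability before running on the physical station.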


2008 ◽  
Vol 41 (1) ◽  
pp. 161-181 ◽  
Author(s):  
Beatriz Sousa Santos ◽  
Paulo Dias ◽  
Angela Pimentel ◽  
Jan-Willem Baggerman ◽  
Carlos Ferreira ◽  
...  

1996 ◽  
Vol 5 (3) ◽  
pp. 274-289 ◽  
Author(s):  
Claudia Hendrix ◽  
Woodrow Barfield

This paper reports the results of three studies, each of which investigated the sense of presence within virtual environments as a function of visual display parameters. These factors included the presence or absence of head tracking, the presence or absence of stereoscopic cues, and the geometric field of view used to create the visual image projected on the visual display. In each study, subjects navigated a virtual environment and completed a questionnaire designed to ascertain the level of presence experienced by the participant within the virtual world. Specifically, two aspects of presence were evaluated: (1) the sense of “being there” and (2) the fidelity of the interaction between the virtual environment participant and the virtual world. Not surprisingly, the results of the first and second study indicated that the reported level of presence was significantly higher when head tracking and stereoscopic cues were provided. The results from the third study showed that the geometric field of view used to design the visual display highly influenced the reported level of presence, with more presence associated with 50° and 90° geometric fields of view than with a narrower 10° geometric field of view. The results also indicated a significant positive correlation between the reported level of presence and the fidelity of the interaction between the virtual environment participant and the virtual world. Finally, it was shown that the survey questions evaluating several aspects of presence produced reliable responses across questions and studies, indicating that the questionnaire is a useful tool when evaluating presence in virtual environments.
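The effect of geometric field of view is easy to see numerically: the horizontal extent of the scene visible at a given depth grows with the tangent of half the GFOV, so the 10° condition shows roughly a tenth of what the 90° condition shows. This small sketch of that relationship is ours, not part of the studies.

```python
import math

def visible_width(distance, gfov_degrees):
    """Horizontal extent of the virtual scene visible at a given depth
    (same units as distance) for a given geometric field of view."""
    half_angle = math.radians(gfov_degrees) / 2
    return 2 * distance * math.tan(half_angle)
```

At a depth of 10 m, a 90° GFOV spans 20 m of scene while a 10° GFOV spans under 2 m, which is consistent with the narrow condition feeling far less immersive.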


2007 ◽  
Vol 16 (6) ◽  
pp. 623-642 ◽  
Author(s):  
Marc Cavazza ◽  
Jean-Luc Lugrin ◽  
Marc Buehner

Causality is an important aspect of how we construct reality. Yet, while many psychological phenomena have been studied in their relation to virtual reality (VR), very little work has been dedicated specifically to causal perception, despite its potential relevance for user interaction and presence. In this paper, we describe the development of a virtual environment supporting experiments with causal perception. The system, inspired by psychological data, operates by intercepting events in the virtual world, so as to create artificial co-occurrences between events and their subsequent effects. After recognizing high-level events and formalizing them with a symbolic representation inspired by robotics planning, it modifies the events' effects using knowledge-based operators. The re-activation of the modified events creates co-occurrences inducing causal impressions in the user. We conducted experiments with fifty-three subjects who had to interact with virtual world objects and were presented with alternative consequences for their actions, generated by the system using various levels of plausibility. At the same time, these subjects had to answer ten items from the Presence Questionnaire corresponding mainly to control and realism factors: causal perception appears to have a positive impact on these items. The implications of this work are twofold: first, causal perception can provide an interesting experimental setting for some presence determinants, and second, the elicitation of causal impressions can become part of VR technologies to provide new forms of VR experiences.
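The intercept-and-rewrite mechanism can be sketched as a rule table: each knowledge-based operator pairs a predicate over a recognized high-level event with a rewrite of its effect, and the modified event is then re-activated. The event schema and the example operator below are our own illustration, not the authors' symbolic representation.

```python
def intercept(event, operators):
    """Return the event with its effect rewritten by the first matching
    knowledge-based operator, or unchanged if none matches."""
    for matches, rewrite in operators:
        if matches(event):
            return {**event, "effect": rewrite(event)}
    return event

# Example operator (hypothetical): when a glass-like object is struck,
# substitute a "bounce" effect for the expected "shatter" effect,
# creating an implausible but co-occurring consequence.
operators = [
    (lambda e: e["action"] == "strike" and e["target"] == "glass",
     lambda e: "bounce"),
]
```

Varying which operators fire, and how far their substituted effects depart from expectation, gives the graded plausibility levels used in the experiments.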


2021 ◽  
Author(s):  
Silvia Arias ◽  
Axel Mossberg ◽  
Daniel Nilsson ◽  
Jonathan Wahlqvist

Abstract Comparing results obtained in Virtual Reality to those obtained in physical experiments is key for validation of Virtual Reality as a research method in the field of Human Behavior in Fire. A series of experiments based on similar evacuation scenarios in a high-rise building with evacuation elevators was conducted. The experiments consisted of a physical experiment in a building, and two Virtual Reality experiments in a virtual representation of the same building: one using a Cave Automatic Virtual Environment (CAVE), and one using a head-mounted display (HMD). The data obtained in the HMD experiment is compared to data obtained in the CAVE and physical experiment. The three datasets were compared in terms of pre-evacuation time, noticing escape routes, walking paths, exit choice, waiting times for the elevators and eye-tracking data related to emergency signage. The HMD experiment was able to reproduce the data obtained in the physical experiment in terms of pre-evacuation time and exit choice, but there were large differences with the results from the CAVE experiment. Possible factors affecting the data produced using Virtual Reality are identified, such as spatial orientation and movement in the virtual environment.

