Interaction Based Redirected Walking

Author(s):  
Thereza Schmelter ◽  
Levente Hernadi ◽  
Marc Aurel Störmer ◽  
Frank Steinicke ◽  
Kristian Hildebrand

With significant improvements in virtual reality (VR) devices, the number of interaction-based applications for consumer and industrial products is naturally increasing. As a result, many people can use VR in their homes or offices, where they are limited by the physical tracking space. One way to overcome this limitation of natural walking is to use perception-inspired locomotion techniques such as redirected walking (RDW). RDW exploits imperfections of human perception to introduce small changes, such as rotations or translations, that steer the user away from the tracking boundaries. In this work we evaluate the detection thresholds for discrete manipulation rotations that reorient users in the scene during interactions. We determine the thresholds for five common interactions (Looking, Picking Up, Throwing, Shooting and Sword Fighting) that can be used as distractions for RDW, which was confirmed in a user study. Based on these findings we propose a novel steer-to-action technique that helps game developers improve VR game experiences.
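The steer-to-action idea of reorienting users while an interaction occupies their attention can be sketched as follows. This is a minimal illustration, assuming a yaw-only scene rotation clamped to an interaction-specific detection threshold; the function names and the example 9-degree threshold are illustrative assumptions, not the authors' implementation or their measured values.

```python
import math

def apply_discrete_rotation(scene_yaw_deg, desired_offset_deg, threshold_deg):
    """Apply a one-shot scene rotation during a distracting interaction,
    clamped to the detection threshold so the user does not notice it."""
    offset = desired_offset_deg
    if abs(offset) > threshold_deg:
        # Clamp to the largest rotation assumed to go unnoticed.
        offset = math.copysign(threshold_deg, offset)
    return (scene_yaw_deg + offset) % 360.0

# Example: a 12-degree reorientation is requested while the user throws
# an object; with an assumed 9-degree threshold for Throwing, only 9
# degrees are injected this time.
new_yaw = apply_discrete_rotation(350.0, 12.0, 9.0)  # → 359.0
```

In a game, such a call would be triggered at the moment of highest distraction, for example when a thrown object leaves the hand.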

2012 ◽  
Author(s):  
R. A. Grier ◽  
H. Thiruvengada ◽  
S. R. Ellis ◽  
P. Havig ◽  
K. S. Hale ◽  
...  

Author(s):  
Robin Horst ◽  
Ramtin Naraghi-Taghi-Off ◽  
Linda Rau ◽  
Ralf Dörner

Abstract: Every Virtual Reality (VR) experience has to end at some point. While concepts already exist for designing transitions that bring users into a virtual world, their return to the physical world should be considered as well, as it is part of the overall VR experience. We call the latter outro-transitions. In contrast to the offboarding of VR experiences, which takes place after taking off VR hardware (e.g., HMDs), outro-transitions are still part of the immersive experience. Such transitions occur more frequently when VR is experienced periodically and only for short periods. One example where transition techniques are necessary is an auditorium where the audience has individual VR headsets available, for example, in a presentation using PowerPoint slides together with brief VR experiences sprinkled between the slides. The audience must put on and take off HMDs frequently, every time they switch from common presentation media to VR and back. In such a one-to-many VR scenario, it is challenging for presenters to manage the process of multiple people coming back from the virtual to the physical world at once. Direct communication may be constrained while VR users are wearing an HMD. Presenters need a tool to signal to them that they should stop the VR session and switch back to the slide presentation. Virtual visual cues can help presenters or other external entities (e.g., automated/scripted events) request that VR users end a VR session. Such transitions become part of the overall experience of the audience and thus must be considered. This paper explores visual cues as outro-transitions from a virtual world back to the physical world and their utility in enabling presenters to request that VR users end a VR session. We propose and investigate eight transition techniques. We focus on their usage in short consecutive VR experiences and include both established and novel techniques.
The transition techniques are evaluated within a user study to draw conclusions on the effects of outro-transitions on the overall experience and presence of participants. We also take into account how long an outro-transition may take and how comfortable our participants found the proposed techniques. The study points out that participants preferred non-interactive outro-transitions over interactive ones, except for a transition that allowed VR users to communicate with presenters. Furthermore, we explore the presenter-VR user relation within a presentation scenario that uses short VR experiences. The study indicates that involving presenters who can stop a VR session was not perceived as disruptive but was in fact preferred by our participants.


2021 ◽  
Author(s):  
Valentin Holzwarth ◽  
Johannes Schneider ◽  
Joshua Handali ◽  
Joy Gisler ◽  
Christian Hirt ◽  
...  

Abstract: Inferring users' perceptions of Virtual Environments (VEs) is essential for Virtual Reality (VR) research. Traditionally, this is achieved by assessing users' affective states before and after exposure to a VE, based on standardized self-assessment questionnaires. The main disadvantage of questionnaires is their sequential administration, i.e., a user's affective state is measured asynchronously to its generation within the VE. A synchronous measurement of users' affective states would be highly favorable, e.g., in the context of adaptive systems. Drawing on nonverbal behavior research, we argue that behavioral measures could be a powerful approach to assessing users' affective states in VR. In this paper, we contribute methods and measures, evaluated in a user study involving 42 participants, that assess users' affective states by measuring head movements during VR exposure. We show that head yaw significantly correlates with presence, mental and physical demand, perceived performance, and system usability. We also exploit the identified relationships for two practical tasks based on head yaw: (1) predicting a user's affective state, and (2) detecting manipulated questionnaire answers, i.e., answers that are possibly non-truthful. We found that affective states can be predicted significantly better than by a naive estimate for mental demand, physical demand, perceived performance, and usability. Further, manipulated or non-truthful answers can also be detected significantly better than by a naive approach. These findings mark an initial step in the development of novel methods to assess user perception of VEs.
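The reported link between head yaw and questionnaire scores can be illustrated with a plain correlation computation; the per-participant data below are hypothetical, and the specific statistics used by the authors may differ.

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equally long samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical per-participant data: accumulated head yaw (degrees)
# during exposure and self-reported presence scores.
head_yaw = [120, 340, 510, 150, 480, 290]
presence = [2.1, 3.8, 4.6, 2.4, 4.9, 3.3]
r = pearson_r(head_yaw, presence)  # strongly positive for this sample
```

Once such a relationship is established, a simple regression on head yaw can serve as the better-than-naive predictor of the affective state.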


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 397
Author(s):  
Qimeng Zhang ◽  
Ji-Su Ban ◽  
Mingyu Kim ◽  
Hae Won Byun ◽  
Chang-Hun Kim

We propose a low-asymmetry interface to improve the presence of non-head-mounted-display (non-HMD) users in shared virtual reality (VR) experiences with HMD users. The low-asymmetry interface ensures that the HMD and non-HMD users' perceptions of the VR environment are nearly the same; that is, the point-of-view (PoV) asymmetry and behavior asymmetry between HMD and non-HMD users are reduced. Our system comprises a portable mobile device as a visual display, providing a changing PoV for the non-HMD user, and a walking simulator as an in-place walking detection sensor, enabling the same level of realistic and unrestricted physical-walking-based locomotion for all users. Because this allows non-HMD users to experience the same level of visualization and free movement as HMD users, both can engage as the main actors in movement scenarios. Our user study revealed that the low-asymmetry interface enables non-HMD users to feel a presence similar to that of HMD users when performing equivalent locomotion tasks in a virtual environment. Furthermore, our system can enable one HMD user and multiple non-HMD users to participate together in a virtual world; moreover, our experiments show that non-HMD user satisfaction increases with the number of non-HMD participants owing to increased presence and enjoyment.
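The in-place walking detection that drives the non-HMD user's locomotion can be approximated in a few lines. This is a deliberately simplified threshold-crossing step counter over an assumed vertical acceleration signal, not the walking-simulator sensor used in the paper.

```python
def count_steps(vertical_accel, threshold=1.5):
    """Count in-place steps as upward crossings of an acceleration
    threshold: one step each time the signal rises above it."""
    steps, above = 0, False
    for a in vertical_accel:
        if a > threshold and not above:
            steps += 1
            above = True
        elif a < threshold:
            above = False
    return steps

# Three acceleration peaks → three detected steps.
n = count_steps([0.0, 2.0, 0.3, 1.9, 0.1, 2.2, 0.0])  # → 3
```

Each detected step would then advance the non-HMD user's avatar by a fixed stride length in the viewing direction.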


2021 ◽  
Author(s):  
James Holth

<p>Architects work within the medium of digital space on a day-to-day basis, yet never truly get to experience the spaces they are creating until after they're built. This creates a disconnect in the design process that can lead to unexpected and unwanted results. Human perception is a powerful instrument, and Virtual Reality (VR) technologies, coupled with more complex digital environments, could enable designers to take advantage of it. By virtually inhabiting the space they are creating while they are creating it, designers can pre-visualise spatial qualities. These digital tools are experiencing a shift from technology still in development to fully-fledged research instruments. With a growing level of technical literacy within the architectural discipline, they could have the same revolutionary impact that the introduction of computers had in the late twentieth century. This thesis explores the potential of VR technology for processes of architectural design by assessing their combined ability to analyse a user's perception of spatial qualities, in particular the sensation of people density within the work environment. Starting with a review of current literature in architecture and perception-based science, a framework is proposed by which to assess the impacts of spatial characteristics within an Immersive Virtual Environment (IVE). This is followed by a design-led series of iterative framework developments centred on increasing user immersion within digital space. Through this methodology, a greater understanding is obtained of users' perceptions of spatial characteristics and of the process required to design iteratively within an IVE framework.</p>


Author(s):  
M. Doležal ◽  
M. Vlachos ◽  
M. Secci ◽  
S. Demesticha ◽  
D. Skarlatos ◽  
...  

<p><strong>Abstract.</strong> Underwater archaeological discoveries bring new challenges to the field, but such sites are more difficult to reach and, due to natural influences, they tend to deteriorate quickly. Photogrammetry is one of the most powerful tools used in archaeological fieldwork. Photogrammetric techniques are used to document the state of a site in digital form for later analysis, without the risk of damaging any of the artefacts or the site itself; archaeologists use this technology to record discovered artefacts or even whole archaeological sites. Data gathering underwater brings several problems and limitations, so to achieve the best possible results divers should come well prepared, with knowledge of measurement and photo-capture methods, before starting work at an underwater site. Using immersive virtual reality, we have developed educational software to introduce maritime archaeology students to photogrammetry techniques. To test the feasibility of the software, a user study was performed and evaluated by experts. In the software, the user is tasked with putting markers on the site, measuring distances between them, and then taking photos of the site, from which a 3D mesh is generated offline. Initial results show that the system is useful for understanding the basics of underwater photogrammetry.</p>


PLoS ONE ◽  
2021 ◽  
Vol 16 (10) ◽  
pp. e0258103
Author(s):  
Andreas Bueckle ◽  
Kilian Buehling ◽  
Patrick C. Shih ◽  
Katy Börner

Working with organs and extracted tissue blocks is an essential task in many surgical and anatomical environments. In order to prepare specimens from human donors for further analysis, wet-bench workers must properly dissect human tissue and collect metadata for downstream analysis, including information about the spatial origin of tissue. The Registration User Interface (RUI) was developed to allow stakeholders in the Human Biomolecular Atlas Program (HuBMAP) to register tissue blocks—i.e., to record the size, position, and orientation of human tissue data with regard to reference organs. The RUI has been used by tissue mapping centers across the HuBMAP consortium to register a total of 45 kidney, spleen, and colon tissue blocks, with planned support for 17 organs in the near future. In this paper, we compare three setups for registering one 3D tissue block object to another 3D reference organ (target) object. The first setup is a 2D Desktop implementation featuring a traditional screen, mouse, and keyboard interface. The remaining setups are both virtual reality (VR) versions of the RUI: VR Tabletop, where users sit at a physical desk which is replicated in virtual space; and VR Standup, where users stand upright while performing their tasks. All three setups were implemented using the Unity game engine. We then ran a user study for these three setups involving 42 human subjects completing 14 increasingly difficult and then 30 identical tasks in sequence and reporting position accuracy, rotation accuracy, completion time, and satisfaction. All study materials were made available in support of future study replication, alongside videos documenting our setups.
We found that while VR Tabletop and VR Standup users are about three times as fast and about a third more accurate in terms of rotation than 2D Desktop users (for the sequence of 30 identical tasks), there are no significant differences between the three setups for position accuracy when normalized by the height of the virtual kidney across setups. When extrapolating from the 2D Desktop setup with a 113-mm-tall kidney, the absolute performance values for the 2D Desktop version (22.6 seconds per task, 5.88 degrees rotation, and 1.32 mm position accuracy after 8.3 tasks in the series of 30 identical tasks) confirm that the 2D Desktop interface is well-suited for allowing users in HuBMAP to register tissue blocks at a speed and accuracy that meets the needs of experts performing tissue dissection. In addition, the 2D Desktop setup is cheaper, easier to learn, and more practical for wet-bench environments than the VR setups.
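The height-normalized position metric used to compare the setups can be sketched as follows; the function name and the example coordinates are illustrative assumptions, with only the 113 mm kidney height taken from the text.

```python
def normalized_position_error(placed_mm, target_mm, organ_height_mm):
    """Euclidean distance between placed and target tissue-block
    positions, divided by the reference organ's height so that setups
    rendering the organ at different scales remain comparable."""
    dist = sum((a - b) ** 2 for a, b in zip(placed_mm, target_mm)) ** 0.5
    return dist / organ_height_mm

# A 1 mm placement error relative to the 113-mm-tall kidney.
e = normalized_position_error((10.0, 5.0, 2.0), (9.0, 5.0, 2.0), 113.0)
```

Dividing by organ height is what makes position accuracy comparable across a desktop monitor and room-scale VR, where the same organ occupies very different on-screen sizes.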


2010 ◽  
pp. 180-193 ◽  
Author(s):  
F. Steinicke ◽  
G. Bruder ◽  
J. Jerald ◽  
H. Frenz

In recent years, virtual environments (VEs) have become more and more popular and widespread due to the requirements of numerous application areas, particularly in the 3D city visualization domain. Virtual reality (VR) systems, which make use of tracking technologies and stereoscopic projections of three-dimensional synthetic worlds, support better exploration of complex datasets. However, due to the limited interaction space usually provided by the range of the tracking sensors, users can explore only a portion of the virtual environment (VE). Redirected walking allows users to walk through large-scale immersive virtual environments (IVEs), such as virtual city models, while physically remaining in a reasonably small workspace, by intentionally injecting scene motion into the IVE. With redirected walking, users are guided on physical paths that may differ from the paths they perceive in the virtual world. The authors have conducted experiments to quantify how much humans can unknowingly be redirected. In this chapter they present the results of this study and the implications for virtual locomotion user interfaces that allow users to view arbitrary real-world locations before actually traveling there in a natural environment.
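The scene-motion injection behind redirected walking can be sketched as a per-frame yaw update; the function and the gain values below are illustrative assumptions, since quantifying which ranges actually go unnoticed is precisely what the chapter's experiments address.

```python
def redirected_yaw_delta(real_yaw_delta_deg, real_step_m,
                         rotation_gain, curvature_deg_per_m):
    """One frame of redirected walking: scale the user's physical head
    rotation by a gain and inject extra yaw proportional to the distance
    walked (curvature), so a virtually straight path bends physically."""
    injected = real_step_m * curvature_deg_per_m
    return real_yaw_delta_deg * rotation_gain + injected

# 10 degrees of real rotation plus a 0.5 m step, with a mild gain of 1.1
# and 2 degrees of injected curvature per meter walked.
virtual_delta = redirected_yaw_delta(10.0, 0.5, 1.1, 2.0)  # ≈ 12.0
```

Keeping the gain close to 1 and the injected curvature small is what allows the manipulation to stay below the user's detection threshold.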


2020 ◽  
Vol 10 (11) ◽  
pp. 4049 ◽  
Author(s):  
Bruce H. Thomas

This article presents a user study of users' perception of an object's size when presented in virtual reality. Critical to users' understanding of virtual worlds is their perception of the size of virtual objects. This article is concerned with virtual objects that are within arm's reach of the user, such as virtual controls (buttons, dials, and levers) that users manipulate to control a virtual reality application. It explores a user's ability to judge the size of an object relative to a second object of a different colour. The results show that the points of subjective equality for height and width judgement tasks, with targets ranging from 10 to 90 mm, were all very close to the target values; that is, participants' height and width judgements closely matched the true sizes. The just-noticeable differences were all less than 1.5 mm for the height judgement task and less than 2.3 mm for the width judgement task.
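The points of subjective equality (PSE) and just-noticeable differences (JND) reported here can be extracted from response proportions; the linear-interpolation approach and the sample data below are illustrative, not the article's actual fitting procedure.

```python
def interp_level(sizes_mm, p_larger, p_target):
    """Linearly interpolate the comparison size at which the proportion
    of 'comparison looks larger' responses crosses p_target."""
    pts = list(zip(sizes_mm, p_larger))
    for (s0, p0), (s1, p1) in zip(pts, pts[1:]):
        if p0 <= p_target <= p1:
            return s0 + (p_target - p0) * (s1 - s0) / (p1 - p0)
    raise ValueError("target proportion not spanned by the data")

# Hypothetical responses against a 50 mm reference object.
sizes = [46, 48, 50, 52, 54]
p = [0.05, 0.25, 0.50, 0.80, 0.95]
pse = interp_level(sizes, p, 0.50)        # point of subjective equality
jnd = interp_level(sizes, p, 0.75) - pse  # just-noticeable difference
```

With these sample numbers the PSE lands on the 50 mm reference and the JND comes out under 2 mm, the same order of magnitude as the thresholds reported above.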

