Kinesthetic Haptics Integration into Large-Scale Virtual Environments

Author(s): U. Kunzler, C. Runde

1997 ◽ Vol 6 (5) ◽ pp. 547-564
Author(s): David R. Pratt, Shirley M. Pratt, Paul T. Barham, Randall E. Barker, Marianne S. Waldrop, ...

This paper examines the representation of humans in large-scale, networked virtual environments. Previous work in this field is summarized, and existing problems with rendering, articulating, and networking numerous human figures in real time are explained. We have developed a system that integrates several well-known solutions with new ideas. Models with multiple levels of detail, body-tracking technology and animation libraries for specifying joint angles, efficient group representations for describing multiple humans, and hierarchical network protocols have been successfully employed to increase the number of humans represented, system performance, and user interactivity. The resulting system immerses participants effectively and has numerous useful applications.
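The level-of-detail idea the abstract mentions can be illustrated with a minimal sketch. This is not the authors' implementation: the model names, polygon counts, and distance thresholds below are hypothetical, chosen only to show how a renderer might swap human models by viewer distance.

```python
# Hypothetical distance-based level-of-detail (LOD) selection for
# rendering many human figures; all values are illustrative.
from dataclasses import dataclass

@dataclass
class LODModel:
    name: str
    polygons: int
    max_distance: float  # use this model up to this viewer distance (metres)

# Ordered from most to least detailed; the last entry covers all distances.
HUMAN_LODS = [
    LODModel("full_body_articulated", 2000, 10.0),
    LODModel("reduced_joints", 800, 50.0),
    LODModel("billboard_sprite", 2, float("inf")),
]

def select_lod(distance: float) -> LODModel:
    """Return the first (most detailed) LOD whose range covers the distance."""
    for lod in HUMAN_LODS:
        if distance <= lod.max_distance:
            return lod
    return HUMAN_LODS[-1]
```

With per-figure selection like this, nearby avatars get fully articulated models while distant ones collapse to cheap sprites, which is what lets a system scale to numerous humans in real time.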


Author(s): Jerry Jen-Hung Tsai, Jeff WT Kan, Xiangyu Wang, Yingsiu Huang

This chapter presents a study on the impact of design scale on collaboration in 3D virtual environments. Different domains require designers to work at different scales; for instance, urban design and electronic circuit design operate at very different scales. However, the understanding of how scale affects collaboration in virtual environments is limited. In this chapter, the authors use protocol analysis to examine the differences between two design collaboration projects in virtual environments within a similar domain: one large scale and one small scale. The study shows that the difference in scale had a greater impact on communication control and social communication.


2010 ◽ pp. 180-193
Author(s): F. Steinicke, G. Bruder, J. Jerald, H. Frenz

In recent years virtual environments (VEs) have become increasingly popular and widespread, driven by the requirements of numerous application areas, particularly 3D city visualization. Virtual reality (VR) systems, which use tracking technologies and stereoscopic projections of three-dimensional synthetic worlds, support better exploration of complex datasets. However, because the interaction space is usually limited by the range of the tracking sensors, users can explore only a portion of the virtual environment (VE). Redirected walking allows users to walk through large-scale immersive virtual environments (IVEs), such as virtual city models, while physically remaining in a reasonably small workspace, by intentionally injecting scene motion into the IVE. With redirected walking, users are guided along physical paths that may differ from the paths they perceive in the virtual world. The authors have conducted experiments to quantify how much humans can be redirected without noticing. In this chapter they present the results of this study and its implications for virtual locomotion user interfaces that allow users to view arbitrary real-world locations before actually traveling there in the natural environment.
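The "injected scene motion" at the core of redirected walking can be sketched with a simple rotation gain: the virtual camera turns by the user's physical head rotation scaled by a gain factor, so the physical and perceived paths slowly diverge. The gain value below is a made-up illustration, not a detection threshold reported by the authors.

```python
# Hypothetical rotation-gain update for redirected walking.
# gain > 1: the user turns less in the real world than in the VE;
# gain < 1: the user turns more in the real world than in the VE.

def redirected_yaw(virtual_yaw_deg: float,
                   physical_delta_deg: float,
                   rotation_gain: float = 1.2) -> float:
    """Advance the virtual camera yaw by a gained copy of the
    user's measured physical head rotation, wrapped to [0, 360)."""
    return (virtual_yaw_deg + rotation_gain * physical_delta_deg) % 360.0

# Example: a 90-degree physical turn with gain 1.2 becomes a
# 108-degree virtual turn, steering the user's physical path.
```

Keeping the gain below the user's perceptual detection threshold is exactly what the chapter's experiments quantify; the locomotion interface then steers users back toward the workspace centre without them noticing.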


Author(s): Filipe Gaspar, Rafael Bastos, Miguel Sales

In large-scale immersive virtual reality (VR) environments, such as a CAVE, one of the most common problems is tracking the position of the user's head while he or she is immersed, so that perspective changes can be reflected in the synthetic stereoscopic images. In this paper, the authors describe the theoretical foundations and engineering approach adopted in the development of an infrared-optical tracking system designed for large-scale immersive virtual environments (VEs) or augmented reality (AR) settings. The system is capable of tracking independent retro-reflective markers arranged in a 3D structure in real time, recovering the full 6-DOF pose. These artefacts can be attached to the user's stereo glasses to track the head while immersed, or used as a 3D input device for rich human-computer interaction (HCI). The hardware configuration consists of four shutter-synchronized cameras fitted with band-pass infrared filters and illuminated by infrared array emitters. Pilot lab results have shown a latency of 40 ms when simultaneously tracking the pose of two artefacts with four infrared markers, achieving a frame rate of 24.80 fps with a mean accuracy of 0.93 mm/0.51° and a mean precision of 0.19 mm/0.04° in overall translation/rotation, respectively, fulfilling the requirements initially defined.
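A basic building block of such multi-camera optical tracking is triangulating a marker's 3D position from the viewing rays of calibrated cameras. The sketch below is not the paper's pipeline; it only shows the standard least-squares ray intersection step, with camera positions and ray directions as placeholder inputs.

```python
# Least-squares intersection of camera viewing rays: find the point p
# minimizing the summed squared distance to all rays. Each ray i is
# origin o_i plus direction d_i; the normal equations are
#   sum_i (I - d_i d_i^T) (p - o_i) = 0.
import numpy as np

def triangulate(origins: np.ndarray, directions: np.ndarray) -> np.ndarray:
    """origins: (N, 3) camera centres; directions: (N, 3) ray directions.
    Returns the 3D point closest (in least squares) to all N rays."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)
```

In a full system, each camera's 2D blob detection of a retro-reflective marker yields one ray; triangulated marker positions are then matched against the known rigid 3D artefact geometry to recover the complete 6-DOF pose.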


2020 ◽ Vol 4 (4) ◽ pp. 79
Author(s): Julian Kreimeier, Timo Götzelmann

Although most readers associate the term virtual reality (VR) with visually appealing entertainment content, the technology also promises to help disadvantaged people, such as blind or visually impaired people. Virtual objects and environments that can be spatially explored are particularly beneficial because they overcome the limitations of physical objects and spaces. To give readers a complete, clear and concise overview of current and past publications on touchable and walkable audio-supplemented VR applications for blind and visually impaired users, this survey paper presents a high-level taxonomy that clusters the work done to date from the perspectives of technology, interaction and application. In this respect, we introduce a classification into small-, medium- and large-scale virtual environments to cluster and characterize related work. Our comprehensive table shows that grounded force-feedback devices for haptic feedback ('small scale') in particular have been strongly researched across different application scenarios, mainly from an exocentric perspective, but there are also increasingly physically walkable ('medium scale') or avatar-walkable ('large scale') egocentric audio-haptic virtual environments. In this respect, novel and widespread interfaces such as smartphones and today's consumer-grade VR components represent promising potential for further improvements. Our survey paper provides a database of related work to foster the creation of new ideas and approaches for both technical and methodological aspects.


2017 ◽ Vol 10 (2)
Author(s): Adrienne Holz Ivory, James D. Ivory, Winston Wu, Anthony M. Limperos, Nathaniel Andrew, ...
...  

While the virtual environments of online games can foster healthy relationships and strong communities, some online games are also marred by antisocial and offensive behavior. Such behavior, even when relatively rare, influences the interactions and relationships of users in online communities. Thus, understanding the prevalence and nature of antisocial and offensive behaviors in online games is an important step toward understanding the full spectrum of healthy and unhealthy interactions and relationships in virtual environments. Extensive research has explored video game content produced by game developers, such as violence, profanity, and sexualized portrayals, but much less research has systematically examined potentially problematic content produced by players in online games. While potential effects of antisocial and offensive online game content are not well understood, a first step toward exploring this concern is systematic documentation of offensive user-generated content in online games. To that end, two large-scale content analyses measured a range of offensive user-generated content, including utterances, text, and images, from a total of more than 2,500 users in popular first-person shooter video games. Findings indicated that some content, such as profanity, was frequent among users who spoke during games. More offensive and potentially harmful content, such as racial slurs, was proportionally very rare, but frequent enough to be encountered often by regular players. Results of this initial investigation should be interpreted tentatively, do not suggest that relationships in online shooter games lack healthy elements, and should not be generalized to other online game communities until further research is conducted.

Note: This paper contains strong language which may be offensive to some readers.

