Towards understanding the capability of spatial audio feedback in virtual environments for people with visual impairments

Author(s):  
Miao Dong ◽  
Rongkai Guo

Author(s):  
Ridwan Ahmed Khan ◽  
Myounghoon Jeon ◽  
Tejin Yoon

Performing independent physical exercise is critical to maintaining good health, but it is especially hard for people with visual impairments. To address this problem, we have developed a Musical Exercise platform for people with visual impairments so that they can consistently exercise with good form. We designed six conditions: blindfolded or sighted without audio, and blindfolded or sighted with one of two types of audio feedback (continuous vs. discrete). Eighteen sighted participants took part in the experiment, performing two exercises (squat and wall sit) under all six conditions. The results show that Musical Exercise is a usable exercise assistance system with no adverse effect on exercise completion time or perceived workload. The results also show that with a specific sound design (i.e., discrete feedback), participants in the blindfolded condition can exercise as consistently as participants in the non-blindfolded condition. This implies that not all sounds work equally well, and thus care is required in refining auditory displays. The potential and limitations of Musical Exercise, as well as future work, are discussed in light of the results.


Author(s):  
Christos Bouras ◽  
Vasileios Triglianos ◽  
Thrasyvoulos Tsiatsos

Three-dimensional Collaborative Virtual Environments are a powerful form of collaborative telecommunication application, enabling users to share a common three-dimensional space and interact with each other as well as with the surrounding environment in order to collaboratively solve problems or support learning processes. One such environment is the "EVE Training Area tool", which is supported by the "EVE platform". This tool is a three-dimensional space in which participants, represented by three-dimensional humanoid avatars, can use a variety of e-collaboration tools. This paper presents advanced functionality that has been integrated into the "EVE Training Area tool" in order to support (a) multiple collaborative learning techniques and (b) spatial audio conferencing, which targets principle 3 (augmenting users' representation and awareness). Furthermore, the paper presents technological and implementation issues concerning the evolution of the "EVE platform" to support this functionality.


Author(s):  
Abdeldjallil Naceri ◽  
Thierry Hoinville ◽  
Ryad Chellali ◽  
Jesus Ortiz ◽  
Shannon Hennig

The main objective of this paper is to investigate whether observers are able to perceive the depth of virtual objects within virtual environments during reaching tasks. In other words, we tackled the question of observer immersion in a displayed virtual environment. For this purpose, eight observers were asked to reach for a virtual object displayed within their peripersonal space under two conditions: in the first, a small virtual sphere was displayed beyond the subject's index finger as an extension of their hand; in the second, no visual feedback was provided. In addition, audio feedback was provided in both conditions when contact with the virtual object was made. Although observers slightly overestimated depth within the peripersonal space, the kinematic analysis shows that they aimed accurately for the virtual objects. Furthermore, no significant difference in movement was found between conditions for any observer. Observers targeted the virtual point accurately in both time and space. This suggests that the virtual environment sufficiently simulated the sensory information normally available to the central nervous system.

