Auditory Displays to Facilitate Object Targeting in 3D Space

Author(s):  
Keenan R. May ◽  
Briana Sobel ◽  
Jeff Wilson ◽  
Bruce N. Walker

In both extreme and everyday situations, humans need to find nearby objects that cannot be located visually. In such situations, auditory display technology could be used to present information supporting object targeting. Unfortunately, spatial audio inadequately conveys sound source elevation, which is crucial for locating objects in 3D space. To address this, three auditory display concepts were developed and evaluated in the context of finding objects within a virtual room under either low- or no-visibility conditions: (1) a one-time height-denoting “area cue,” (2) ongoing “proximity feedback,” or (3) both cues combined. All three led to improvements in performance and subjective workload compared to no sound, with displays (2) and (3) yielding the largest improvements. This pattern was attenuated, but still present, under low visibility compared to no visibility. These results indicate that persons who need to locate nearby objects in limited-visibility conditions could benefit from the types of auditory displays considered here.
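The abstract does not specify how the ongoing proximity feedback was parameterized. As an illustrative sketch only (the function name, frequency range, and distance cutoff are all hypothetical, not the authors' design), a distance-to-pitch mapping of the kind commonly used in such displays might look like:

```python
def proximity_pitch(distance_m, max_distance_m=5.0,
                    f_near=1200.0, f_far=300.0):
    """Map target distance to a tone frequency: closer -> higher pitch.

    Linearly interpolates between f_far (at or beyond max_distance_m)
    and f_near (at zero distance); distances are clamped to the range.
    """
    d = min(max(distance_m, 0.0), max_distance_m)
    t = 1.0 - d / max_distance_m  # 1.0 at the target, 0.0 far away
    return f_far + t * (f_near - f_far)
```

A display could resynthesize the tone at this frequency on every tracking update, giving the listener a continuous pitch gradient toward the target.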

Author(s):  
Robert A. King ◽  
Gregory M. Corso

Pilots often turn off the auditory displays that are provided to improve their performance (Wiener, 1977; Veitengruber, Boucek, & Smith, 1977). The intensity of the auditory display is often cited as a possible cause of this behavior (Cooper, 1977). However, processing the additional information is a concurrent task demand that may increase subjective workload (Wickens & Yeh, 1983; McCloy, Derrick, & Wickens, 1983). Pilots may therefore attempt to reduce subjective workload at the expense of performance by turning off the auditory display. Forty undergraduate males performed a visual search task. Three conditions (auditory display on, auditory display off, and subject's choice) were run in combination with nine levels of visual display load. The auditory display, a 4000 Hz tone presented at a between-subjects intensity of 60, 70, 80, or 90 dB(A), indicated that the target letter was in the lower half of the search area. The NASA-TLX (Task Load Index) was used to measure subjective workload after each block of trials (Hart & Staveland, 1988). A non-monotonic relationship was found between auditory display intensity and auditory display usage. Evidence was found that the auditory display increased some aspects of subjective workload, namely physical demands and frustration. Furthermore, performance and subjective workload dissociated in the manner predicted by Wickens and Yeh (1983). The implications of these results for display design are discussed.


Author(s):  
Myounghoon Jeon

While design theories in visual displays have been well developed and refined, relatively little research has been conducted on design theories and models for auditory displays. Existing discussions mainly account for functional mappings between sounds and referents, but these do not fully address the design aspects of auditory displays. To bridge the gap, the present proposal focuses on design affordances in sound design among many design constructs. To this end, the definition and components of design affordances are briefly explored, followed by auditory display examples of those components, to gauge whether sound can deliver perceived affordances in interactive products. Finally, other design constructs, such as feedback and signifiers, are discussed along with future work. This exploratory proposal is expected to contribute to elaborating sound design theory and practice.


Author(s):  
Bartholomew Elias

The effects of a dynamic auditory preview display were examined in a visual target aiming task. A moving sound stimulus aligned with a visual target was presented over various distances beyond the bounds of a visual display. Results indicated reduced error magnitudes in aimed responses to visual targets with increasing auditory preview distance. In subsequent testing, the effects of position and velocity misalignments between the sound source and the visual target were assessed. In position misalignment conditions where the sound source lagged behind the visual target, higher error magnitudes were observed; however, when the auditory display preceded the visual target, performance improved. In velocity mismatch conditions, responses toward fast-moving targets improved when a relatively faster sound source was previewed but were disrupted when a slower sound source was previewed. Conversely, responses toward slow-moving targets improved when a relatively slower sound source was previewed and were disrupted when a faster one was previewed.


Author(s):  
Ellen C. Haas ◽  
Rene de Pontbriand ◽  
Robert Mello ◽  
John Patton ◽  
Alexander Solounias

The purpose of this study was to determine the extent to which different types of audio display technology affected the ability of the physically active, load-carrying dismounted soldier to understand and respond to multiple radio communications on the battlefield. Independent variables were auditory display configuration (existing monaural versus spatial audio), number of simultaneous talkers in each simulated radio message (two, three, or four), and soldier rucksack load (22 kg or 33 kg). The dependent variables included response time and number of accurate responses to the radio messages, soldier ratings of mental workload, and soldier physiological workload. Subjects were nine male Marine Corps Infantry personnel and three male Army Infantry personnel. Results indicated that spatial auditory displays enabled soldiers to identify a significantly greater number of simulated radio communications and to respond to them more quickly. Message response time increased and identification accuracy decreased as the number of simultaneous talkers increased. Rucksack weight was a predominant variable in physical and mental workload: soldiers showed significantly greater physiological energy expenditure and significantly greater mental workload when they carried the heavier rucksack. The results indicated that, whatever the load carried by the soldier, the speed and accuracy of understanding and responding to multiple radio communications were enhanced by presenting them in different spatial locations.


1996 ◽  
Vol 5 (3) ◽  
pp. 290-301 ◽  
Author(s):  
Claudia Hendrix ◽  
Woodrow Barfield

Two studies were performed to investigate the sense of presence within stereoscopic virtual environments as a function of the addition or absence of auditory cues. The first study examined the presence or absence of spatialized sound, while the second compared nonspatialized sound to spatialized sound. Sixteen subjects were allowed to navigate freely throughout several virtual environments and, for each environment, their level of presence, the realism of the virtual world, and the interactivity between participant and environment were evaluated using survey questions. The results indicated that the addition of spatialized sound significantly increased the sense of presence but not the realism of the virtual environment. Despite this outcome, the addition of a spatialized sound source significantly increased the realism with which subjects interacted with the sound source, and significantly increased the sense that sounds emanated from specific locations within the virtual environment. The results suggest that, in the context of a navigation task, while presence in virtual environments can be improved by the addition of auditory cues, the perceived realism of a virtual environment may be influenced more by changes in the visual than in the auditory display media. Implications of these results for presence within auditory virtual environments are discussed.


Author(s):  
Ivica Ico Bukvic ◽  
Gregory Earle ◽  
Disha Sardana ◽  
Woohun Joo

The Spatial Audio Data Immersive Experience (SADIE) project aims to identify new foundational relationships pertaining to human spatial aural perception, and to validate existing relationships. Our infrastructure consists of an intuitive interaction interface, an immersive exocentric sonification environment, and a layer-based amplitude-panning algorithm. Here we highlight the system’s unique capabilities and provide findings from an initial externally funded study that focuses on the assessment of human aural spatial perception capacity. When compared to the existing body of literature focusing on egocentric spatial perception, our data show that an immersive exocentric environment enhances spatial perception, and that the physical implementation using high density loudspeaker arrays enables significantly improved spatial perception accuracy relative to the egocentric and virtual binaural approaches. The preliminary observations suggest that human spatial aural perception capacity in real-world-like immersive exocentric environments that allow for head and body movement is significantly greater than in egocentric scenarios where head and body movement is restricted. Therefore, in the design of immersive auditory displays, the use of immersive exocentric environments is advised. Further, our data identify a significant gap between physical and virtual human spatial aural perception accuracy, which suggests that further development of virtual aural immersion may be necessary before such an approach may be seen as a viable alternative.
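The layer-based amplitude-panning algorithm itself is not detailed in the abstract. As a hedged illustration of amplitude panning in general (not SADIE's actual implementation; the function and its parameters are hypothetical), a constant-power pairwise pan between two adjacent loudspeakers within a layer can be sketched as:

```python
import math

def pan_constant_power(angle_deg, spk_left_deg, spk_right_deg):
    """Constant-power gains for a source between two adjacent loudspeakers.

    Maps the source angle to a pan position in [0, 1] between the pair,
    then applies sine/cosine gains so that g_l**2 + g_r**2 == 1, keeping
    perceived loudness roughly constant as the source moves.
    """
    span = spk_right_deg - spk_left_deg
    pos = (angle_deg - spk_left_deg) / span  # 0 at left speaker, 1 at right
    pos = min(max(pos, 0.0), 1.0)            # clamp outside the pair
    theta = pos * math.pi / 2
    return math.cos(theta), math.sin(theta)
```

A layer-based scheme would apply a pan like this within each loudspeaker ring (layer) and distribute energy between layers by elevation; the sketch above covers only the within-layer step.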


2021 ◽  
Vol 2 ◽  
Author(s):  
Richard Skarbez ◽  
Missie Smith ◽  
Mary C. Whitton

Since its introduction in 1994, Milgram and Kishino's reality-virtuality (RV) continuum has been used to frame virtual and augmented reality research and development. While the RV continuum and the three dimensions of its supporting taxonomy (extent of world knowledge, reproduction fidelity, and extent of presence metaphor) were originally intended to characterize the capabilities of visual display technology, researchers have embraced the RV continuum while largely ignoring the taxonomy. Considering the leaps in technology made over the last 25 years, revisiting the RV continuum and taxonomy is timely. In reexamining Milgram and Kishino's ideas, we realized, first, that the RV continuum is actually discontinuous: perfect virtual reality cannot be reached. Second, mixed reality is broader than previously believed and, in fact, encompasses conventional virtual reality experiences. Finally, our revised taxonomy adds coherence, accounting for the role of users, which is critical to assessing modern mixed reality experiences. The 3D space created by our taxonomy incorporates familiar constructs such as presence and immersion, and also proposes new constructs that may be important as mixed reality technology matures.


Author(s):  
Doon MacDonald ◽  
Tony Stockman

This paper presents SoundTrAD, a method and tool for designing auditory displays for the user interface. SoundTrAD brings together ideas from user interface design and soundtrack composition, and supports novice auditory display designers in building an auditory user interface. The paper argues for the need for such a method before describing the fundamental structure of the method and the construction of the supporting tools. The second half of the paper applies SoundTrAD to an autonomous driving scenario and demonstrates its use in prototyping auditory displays for a wide range of scenarios.

