A Bone Conduction Based Spatial Auditory Display as Part of a Wearable Hybrid Interface

Author(s):  
Amit Barde ◽
Matt Ward ◽  
William S. Helton ◽  
Mark Billinghurst

Attention redirection trials were carried out using a wearable interface incorporating auditory and visual cues. Visual cues were delivered via the screen on the Recon Jet – a wearable computer resembling a pair of glasses – while auditory cues were delivered over a bone conduction headset. Cueing conditions included the delivery of individual cues, both auditory and visual, and in combination with each other. Results indicate that the use of an auditory cue drastically decreases target acquisition times. This is especially true for targets that fall outside the visual field of view. While auditory cues showed no difference when paired with any of the visual cueing conditions for targets within the user's field of view, a significant improvement in performance was observed for targets outside it. The static visual cue paired with the binaurally spatialised, dynamic auditory cue provided the best performance of all the cueing conditions. In the absence of a visual cue, the binaurally spatialised, dynamic auditory cue performed best.

Author(s):  
Jaime C. Auton ◽  
Mark W. Wiggins ◽  
Daniel Sturman

Within high-risk operational environments, expertise has typically been associated with a greater capacity to extract and utilize task-relevant visual cues during situation assessment. However, a limitation of this literature is its exclusive focus on operators’ use of visual cues, even though cues from other modalities (such as auditory cues) are frequently engaged during this assessment process. Arguably, if the capacity for cue utilization is an underlying skill, those operators who have a greater capacity to use visual cues would also have developed a more nuanced repertoire of non-visual cues. Within the context of electricity distribution control, the current study recruited network operators (N = 89) from twelve Australian Distribution Network Service Providers. Using an online experimental platform, participants’ visual cue utilization was assessed using an online situational judgement test (EXPERTise 2.0). Participants also completed the Auditory Readback Task, which assessed their capacity to utilize various auditory cues (namely, final rising intonation, fillers, readback accuracy) when recognising non-understandings. The results showed a partial relationship between operator capacity for visual and auditory cue utilization. The outcomes of the current research have practical implications for the design of cue-based training interventions to increase the recognition of communication-related errors within distributed environments.


1987 ◽  
Vol 31 (12) ◽  
pp. 1398-1402 ◽  
Author(s):  
Gloria L. Calhoun ◽  
German Valencia ◽  
Thomas A. Furness

A three-dimensional (3-D) auditory display can increase the pilot's situational awareness without requiring visual fixation. When visual acquisition is required, the directional sound can give the pilot a more rapid cue to aim the eyes or head. In order to determine the utility and performance of a 3-D auditory display for cockpit applications, a method for generating 3-D auditory cues is required for simulation. Two laboratory systems are described which create, from monaural stimuli, binaural stimuli which can be perceived as localized and stabilized in space, regardless of the listener's head position. Additionally, preliminary results of the localization performance with one approach are presented.
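The core operation the abstract describes – creating binaural stimuli from monaural stimuli so a sound is perceived as localized in space – is typically done by convolving the mono signal with a measured head-related impulse response (HRIR) pair for the cued direction. A minimal sketch, assuming placeholder HRIRs (real HRIRs are measured impulse responses, often a few hundred taps per ear, selected or interpolated for the listener's current head orientation):

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a monaural signal with a left/right HRIR pair to
    produce a two-channel signal perceived at the HRIRs' direction."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    # Stack into an (n_samples, 2) array: column 0 = left ear, 1 = right.
    return np.stack([left, right], axis=-1)

# Toy example: a unit click rendered with hypothetical 2-tap HRIRs.
click = np.zeros(8)
click[0] = 1.0
out = render_binaural(click, np.array([0.9, 0.1]), np.array([0.3, 0.05]))
```

Head stabilization, as described in the abstract, would amount to re-selecting the HRIR pair whenever a head tracker reports a new orientation, so the virtual source stays fixed in world coordinates.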


Author(s):  
Clayton Rothwell ◽  
Griffin Romigh ◽  
Brian Simpson

As visual display complexity grows, visual cues and alerts may become less salient and therefore less effective. Although the auditory system’s resolution is rather coarse relative to the visual system, there is some evidence that virtual spatialized audio benefits visual search over a small frontal region, such as a desktop monitor. Two experiments examined whether search times could be reduced compared to visual-only search through spatial auditory cues rendered using one of two methods: individualized or generic head-related transfer functions. Results showed the cue type interacted with display complexity, with larger reductions compared to visual-only search as set size increased. For larger set sizes, individualized cues were significantly better than generic cues overall. Across all set sizes, individualized cues were better than generic cues for cueing eccentric elevations (>±8°). Where performance must be maximized, designers should use individualized virtual audio if at all possible, even in a small frontal region within the field of view.


Author(s):  
Amit Barde ◽  
Matt Ward ◽  
William S. Helton ◽  
Mark Billinghurst ◽  
Gun Lee

Visual search performance was studied using auditory cues delivered over a bone conduction headset. Two types of auditory cues were employed to evaluate the effectiveness of such cues in an attention redirection task. Participants were required to locate and shoot targets at one of four locations on a screen when one of the two audio cues was delivered. Reaction and target acquisition times were significantly reduced when the binaurally spatialised cues were used compared to unlocalisable, monophonic cues. This appears to suggest that an auditory cue with directional information is far superior at aiding search tasks or alerting the user to redirect attention in the real-world space in comparison to a centered ‘monophonic’ cue. The results demonstrate the effectiveness of a binaurally spatialised, dynamic cue and point to its potential use in an information rich environment to provide useful and actionable information.


Author(s):  
Adam F. Werner ◽  
Jamie C. Gorman

Objective: This study examines visual, auditory, and the combination of both (bimodal) coupling modes in the performance of a two-person perceptual-motor task, in which one person provides the perceptual inputs and the other the motor inputs.

Background: Parking a plane or landing a helicopter on a mountain top requires one person to provide motor inputs while another person provides perceptual inputs. Perceptual inputs are communicated either visually, auditorily, or through both cues.

Methods: One participant drove a remote-controlled car around an obstacle and through a target, while another participant provided auditory, visual, or bimodal cues for steering and acceleration. Difficulty was manipulated using target size. Performance (trial time, path variability), cue rate, and spatial ability were measured.

Results: Visual coupling outperformed auditory coupling. Bimodal performance was best in the most difficult task condition but also high in the easiest condition. Cue rate predicted performance in all coupling modes. Drivers with lower spatial ability required a faster auditory cue rate, whereas drivers with higher ability performed best with a lower rate.

Conclusion: Visual cues result in better performance when only one coupling mode is available. As predicted by multiple resource theory, when both cues are available, performance depends more on auditory cueing. In particular, drivers must be able to transform auditory cues into spatial actions.

Application: Spotters should be trained to provide an appropriate cue rate to match the spatial ability of the driver or pilot. Auditory cues can enhance visual communication when the interpersonal task is visual with spatial outputs.


1976 ◽  
Vol 28 (2) ◽  
pp. 193-202 ◽  
Author(s):  
Philip Merikle

Report of single letters from centrally fixated, seven-letter target rows was probed by either auditory or visual cues. The target rows were presented for 100 ms, and the report cues were single digits which indicated the spatial location of a letter. In three separate experiments, report was always better with the auditory cues. The advantage for the auditory cues was maintained both when target rows were masked by a patterned stimulus and when the auditory cues were presented 500 ms later than comparable visual cues. The results indicate that visual cues produce modality-specific interference which operates at a level of processing beyond iconic representation.


1978 ◽  
Vol 46 (1) ◽  
pp. 91-94 ◽  
Author(s):  
William J. Wyatt

A profoundly retarded 28-yr.-old female was trained to avoid an aversive but harmless shock to the foot by withdrawing the foot upon presentation of a visual cue. She was later unable to learn to avoid the shock consistently upon presentation of an auditory cue, confirming the ward staff's contention that she had a hearing disability. The audiometric technique using negative reinforcement bridges the problems of (1) difficulty in finding positive reinforcers for patients of low functioning and (2) satiation which may result from the continued use of positive reinforcers. The use of aversive stimuli raises ethical concerns. The growing trend in research is that aversive stimuli are permissible for individuals for whom positive techniques have not been effective and when used by trained professionals under careful review.


Author(s):  
Nada Zwayyid Almutairi ◽  
Eman Salah Ibrahim Rizk

This study explores the effectiveness of interactive e-book (Ie-book) cues and Information Processing Levels (IPL) on Learning Retention (LR) and External Cognitive Load (ECL). 117 middle school pupils (MSP) were divided into six experimental groups based on their IPL and cue type during the second term of the 2019–2020 academic year. Visual Cues (VC) versus Audiovisual Cues (AVC) and Auditory Cues (AC) versus Audiovisual Cues (AVC) differed statistically in the Ie-book on the LR test and the ECL scale, as did the average scores when testing LR in Science for MSP, due to the difference in IPL for the DL. There is a statistically significant interaction effect of cue type in the Ie-book with IPL on the ECL scale for MSP, at its highest in the case of AVC with DL, followed by the interaction of VC with DL and then AC with SL. The interaction of cues in the Ie-book with IPL also strongly affects the LR test for MSP, which is at its highest in the case of AVC with DL. The interactions between (DL–SL) and (AC–VC) appear to influence the ECL equally.


2021 ◽  
Vol 11 ◽  
Author(s):  
Christopher R. Madan ◽  
Anthony Singhal

Learning to play a musical instrument involves mapping visual + auditory cues to motor movements and anticipating transitions. Inspired by the serial reaction time task and artificial grammar learning, we investigated explicit and implicit knowledge of statistical learning in a sensorimotor task. Using a between-subjects design with four groups, one group of participants was provided with visual cues and followed along by tapping the corresponding fingertip to their thumb while wearing a computer glove. Another group additionally received accompanying auditory tones; the final two groups received sensory (visual or visual + auditory) cues but did not provide a motor response, altogether following a 2 × 2 design. Implicit knowledge was measured by response time, whereas explicit knowledge was assessed using probe tests. Findings indicate that explicit knowledge was best with only the single modality, but implicit knowledge was best when all three modalities were involved.


2006 ◽  
Vol 95 (6) ◽  
pp. 3596-3616 ◽  
Author(s):  
Eiji Hoshi ◽  
Jun Tanji

We examined neuronal activity in the dorsal and ventral premotor cortex (PMd and PMv, respectively) to explore the role of each motor area in processing visual signals for action planning. We recorded neuronal activity while monkeys performed a behavioral task during which two visual instruction cues were given successively with an intervening delay. One cue instructed the location of the target to be reached, and the other indicated which arm was to be used. We found that the properties of neuronal activity in the PMd and PMv differed in many respects. After the first cue was given, PMv neuron responses mostly reflected the spatial position of the visual cue. In contrast, PMd neuron responses also reflected what the visual cue instructed, such as which arm was to be used or which target was to be reached. After the second cue was given, PMv neurons initially responded to the cue's visuospatial features and later reflected what the two visual cues instructed, progressively increasing information about the target location. In contrast, the majority of PMd neurons responded to the second cue with activity reflecting a combination of information supplied by the first and second cues. Such activity, already reflecting a forthcoming action, appeared with short latencies (<400 ms) and persisted throughout the delay period. In addition, both the PMv and PMd showed bilateral representation of visuospatial information and motor-target or effector information. These results further elucidate the functional specialization of the PMd and PMv during the processing of visual information for action planning.
