Effect of long-term training on sound localization performance with spectrally warped and band-limited head-related transfer functions

2013 ◽  
Vol 134 (3) ◽  
pp. 2148-2159 ◽  
Author(s):  
Piotr Majdak ◽  
Thomas Walder ◽  
Bernhard Laback

2018 ◽  
Author(s):  
Axel Ahrens ◽  
Kasper Duemose Lund ◽  
Marton Marschall ◽  
Torsten Dau

To achieve accurate spatial auditory perception, subjects typically require personal head-related transfer functions (HRTFs) and the freedom for head movements. Loudspeaker-based virtual sound environments allow for realism without individualized measurements. To study audio-visual perception in realistic environments, the combination of spatially tracked head-mounted displays (HMDs), also known as virtual reality glasses, and virtual sound environments may be valuable. However, HMDs were recently shown to affect the subjects’ HRTFs and thus might influence sound localization performance. Furthermore, due to limitations of the reproduction of visual information on the HMD, audio-visual perception might be influenced. Here, a sound localization experiment was conducted both with and without an HMD and with a varying amount of visual information provided to the subjects. Furthermore, interaural time and level difference errors (ITD and ILD errors) as well as spectral perturbations induced by the HMD were analyzed and compared to the perceptual localization data. The results showed a reduction of localization accuracy when the subjects were wearing an HMD and when they were blindfolded. The HMD-induced error in azimuth localization was found to be larger in the left than in the right hemisphere. Thus, the errors in ITD and ILD can only partly account for the perceptual differences. When visual information on the limited set of source locations was provided, the localization error induced by the HMD was found to be negligible. Presenting visual information on hand location, room dimensions, source locations, and pointing feedback on the HMD revealed similar effects as previously shown in real environments.
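
The kind of ITD/ILD analysis described above can be sketched as follows. This is a minimal illustration, not the authors' analysis code: it assumes a broadband ITD estimated from the cross-correlation peak of a left/right impulse-response pair, and a broadband ILD from the energy ratio between the ears.

```python
import numpy as np

def itd_ild(h_left, h_right, fs):
    """Estimate broadband ITD (s) and ILD (dB) from a pair of
    head-related impulse responses sampled at fs Hz."""
    # ITD: lag of the cross-correlation peak; with this convention a
    # negative lag means the sound reaches the left ear first
    xcorr = np.correlate(h_left, h_right, mode="full")
    lag = int(np.argmax(np.abs(xcorr))) - (len(h_right) - 1)
    itd = lag / fs
    # ILD: broadband energy ratio between the two ears
    ild = 10.0 * np.log10(np.sum(h_left**2) / np.sum(h_right**2))
    return itd, ild

# Toy responses: right ear delayed by 20 samples and attenuated by 6 dB
fs = 44100
h_l = np.zeros(256); h_l[10] = 1.0
h_r = np.zeros(256); h_r[30] = 10.0 ** (-6.0 / 20.0)
itd, ild = itd_ild(h_l, h_r, fs)  # itd ≈ -454 µs, ild ≈ +6 dB
```

Comparing such estimates with and without the HMD in place would quantify the device-induced cue perturbations per direction.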


2011 ◽  
Vol 32 (3) ◽  
pp. 121-124 ◽  
Author(s):  
Kanji Watanabe ◽  
Ryosuke Kodama ◽  
Sojun Sato ◽  
Shouichi Takane ◽  
Koji Abe

Author(s):  
Hasan Bagbanci ◽  
D. Karmakar ◽  
C. Guedes Soares

The long-term probability distributions of a spar-type and a semisubmersible-type offshore floating wind turbine response are calculated for surge, heave, and pitch motions along with the side-to-side, fore–aft, and yaw tower base bending moments. The transfer functions for surge, heave, and pitch motions for both spar-type and semisubmersible-type floaters are obtained using the FAST code, and the results are also compared with the results obtained in an experimental study. The long-term predictions of the most probable maximum values of motion amplitudes are used for design purposes, so as to guarantee the safety of the floating wind turbines against overturning in high waves and wind speeds. The long-term distribution is carried out using North Atlantic wave data, and the short-term floating wind turbine responses are represented using Rayleigh distributions. The transfer functions are used in the procedure to calculate the variances of the short-term responses. The results obtained for both spar-type and semisubmersible-type offshore floating wind turbines are compared, and the study will be helpful in the assessment of the long-term availability and economic performance of the spar-type and semisubmersible-type offshore floating wind turbine.
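
The short-term step of this procedure can be sketched numerically: the response variance follows from integrating the squared transfer function against the wave spectrum, and the most probable maximum of a Rayleigh-distributed response follows from the number of response cycles. The spectrum shape, the toy single-degree-of-freedom RAO, and all numerical values below are illustrative assumptions, not values from the study.

```python
import numpy as np

def short_term_variance(omega, rao, spectrum):
    """Short-term response variance: sigma^2 = integral of |H(w)|^2 S(w) dw,
    approximated on a uniform frequency grid."""
    d_omega = omega[1] - omega[0]
    return float(np.sum(np.abs(rao) ** 2 * spectrum) * d_omega)

def most_probable_max(sigma, duration_s, mean_period_s):
    """Most probable maximum of a narrow-band (Rayleigh-distributed)
    response over N = duration / mean period cycles: sigma * sqrt(2 ln N)."""
    n_cycles = duration_s / mean_period_s
    return sigma * np.sqrt(2.0 * np.log(n_cycles))

# Illustrative sea state and transfer function (NOT values from the study)
omega = np.linspace(0.2, 2.0, 1000)   # wave frequency, rad/s
hs, tp = 6.0, 10.0                    # significant wave height (m), peak period (s)
wp = 2.0 * np.pi / tp
spectrum = (5.0 / 16.0) * hs**2 * wp**4 / omega**5 * np.exp(-1.25 * (wp / omega)**4)
rao = 1.0 / np.sqrt((1.0 - (omega / 0.9)**2) ** 2 + (0.1 * omega) ** 2)  # toy 1-DOF RAO
sigma = np.sqrt(short_term_variance(omega, rao, spectrum))
x_mp = most_probable_max(sigma, duration_s=3 * 3600.0, mean_period_s=tp)
```

Repeating this over the scatter diagram of sea states, weighted by their occurrence probabilities, yields the long-term distribution used for design.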


Author(s):  
Snandan Sharma ◽  
Waldo Nogueira ◽  
A. John van Opstal ◽  
Josef Chalupper ◽  
Lucas H. M. Mens ◽  
...  

Purpose Speech understanding in noise and horizontal sound localization are poor in most cochlear implant (CI) users with a hearing aid (bimodal stimulation). This study investigated the effect of static and less-extreme adaptive frequency compression in hearing aids on spatial hearing. By means of frequency compression, we aimed to restore high-frequency audibility, and thus improve sound localization and spatial speech recognition. Method Sound-detection thresholds, sound localization, and spatial speech recognition were measured in eight bimodal CI users, with and without frequency compression. We tested two compression algorithms: a static algorithm, which compressed frequencies beyond the compression knee point (160 or 480 Hz), and an adaptive algorithm, which aimed to compress only consonants, leaving vowels unaffected (adaptive knee-point frequencies from 736 to 2946 Hz). Results Compression yielded a strong audibility benefit (high-frequency thresholds improved by 40 and 24 dB for static and adaptive compression, respectively), but no meaningful improvement in either localization performance (errors remained > 30 deg) or spatial speech recognition across all participants. Localization biases without compression (toward the hearing-aid and implant side for low- and high-frequency sounds, respectively) disappeared or reversed with compression. The audibility benefits provided to each bimodal user partially explained any individual improvements in localization performance; shifts in bias; and, for six out of eight participants, benefits in spatial speech recognition. Conclusions We speculate that limiting factors such as a persistent hearing asymmetry and mismatch in spectral overlap prevent compression in bimodal users from improving sound localization.
Therefore, the benefit in spatial release from masking is likely due to a compression-facilitated shift of attention to the ear with the better signal-to-noise ratio, rather than to improved spatial selectivity. Supplemental Material https://doi.org/10.23641/asha.16869485
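
A static frequency-compression map of the kind described (linear compression above a fixed knee point) might look like the following sketch. Only the knee-point values come from the abstract; the compression ratio and the mapping form are assumed for illustration.

```python
def compress_frequency(f_hz, knee_hz=480.0, ratio=2.0):
    """Static frequency compression: input frequencies above the knee point
    are mapped linearly toward the knee by a fixed ratio (an assumed value);
    frequencies at or below the knee are left unchanged."""
    if f_hz <= knee_hz:
        return f_hz
    return knee_hz + (f_hz - knee_hz) / ratio

# A 2946 Hz component with the 480 Hz knee lands at 480 + 2466/2 = 1713 Hz
compressed = compress_frequency(2946.0)  # -> 1713.0
```

The adaptive variant described in the study would additionally move the knee point upward (736–2946 Hz) during vowel segments so that mostly consonant energy is compressed.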


2000 ◽  
Vol 83 (4) ◽  
pp. 2300-2314 ◽  
Author(s):  
U. Koch ◽  
B. Grothe

To date, most physiological studies that investigated binaural auditory processing have addressed the topic rather exclusively in the context of sound localization. However, there is strong psychophysical evidence that binaural processing serves more than only sound localization. This raises the question of how binaural processing of spatial cues interacts with cues important for feature detection. The temporal structure of a sound is one such feature important for sound recognition. As a first approach, we investigated the influence of binaural cues on temporal processing in the mammalian auditory system. Here, we present evidence that binaural cues, namely interaural intensity differences (IIDs), have profound effects on filter properties for stimulus periodicity of auditory midbrain neurons in the echolocating big brown bat, Eptesicus fuscus. Our data indicate that these effects are partially due to changes in strength and timing of binaural inhibitory inputs. We measured filter characteristics for the periodicity (modulation frequency) of sinusoidally frequency modulated sounds (SFM) under different binaural conditions. As criteria, we used 50% filter cutoff frequencies of modulation transfer functions based on discharge rate as well as synchronicity of discharge to the sound envelope. The binaural conditions were contralateral stimulation only, equal stimulation at both ears (IID = 0 dB), and more intense stimulation at the ipsilateral ear (IID = −20, −30 dB). In 32% of neurons, the range of modulation frequencies the neurons responded to changed considerably when comparing monaural and binaural (IID = 0 dB) stimulation. Moreover, in ∼50% of neurons the range of modulation frequencies was narrower when the ipsilateral ear was favored (IID = −20 dB) compared with equal stimulation at both ears (IID = 0 dB). In ∼10% of the neurons, synchronization differed when comparing different binaural cues.
Blockade of the GABAergic or glycinergic inputs to the cells recorded from revealed that inhibitory inputs were at least partially responsible for the observed changes in SFM filtering. In 25% of the neurons, drug application abolished those changes. Experiments using electronically introduced interaural time differences showed that the strength of ipsilaterally evoked inhibition increased with increasing modulation frequencies in one third of the cells tested. Thus glycinergic and GABAergic inhibition is at least one source responsible for the observed interdependence of temporal structure of a sound and spatial cues.
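
The 50% filter-cutoff criterion used above can be sketched as a half-maximum crossing of the rate-based modulation transfer function. The linear interpolation between tested modulation frequencies and the example rate values are assumptions for illustration, not the authors' procedure or data.

```python
def cutoff_50(mod_freqs, rates):
    """Highest modulation frequency at which a rate-based modulation
    transfer function falls to half of its maximum (50% cutoff),
    with linear interpolation between tested modulation frequencies."""
    half = max(rates) / 2.0
    # scan from the highest modulation frequency down for the half-max crossing
    for i in range(len(rates) - 1, 0, -1):
        if rates[i] < half <= rates[i - 1]:
            frac = (half - rates[i - 1]) / (rates[i] - rates[i - 1])
            return mod_freqs[i - 1] + frac * (mod_freqs[i] - mod_freqs[i - 1])
    return mod_freqs[-1]  # never drops below half-max: neuron follows all rates

# Hypothetical rate-MTF: spike rate vs. SFM modulation frequency (Hz)
f50 = cutoff_50([10, 20, 40, 80], [100, 100, 50, 0])  # -> 40.0
```

Comparing such cutoffs across binaural conditions (monaural vs. IID = 0 vs. IID = −20 dB) would quantify the filter shifts reported in the study.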


2014 ◽  
Vol 25 (09) ◽  
pp. 791-803 ◽  
Author(s):  
Evelyne Carette ◽  
Tim Van den Bogaert ◽  
Mark Laureyns ◽  
Jan Wouters

Background: Several studies have demonstrated negative effects of directional microphone configurations on left-right and front-back (FB) sound localization. New processing schemes, such as frequency-dependent directionality and front focus with wireless ear-to-ear communication in recent, commercial hearing aids may preserve the binaural cues necessary for left-right localization and may introduce useful spectral cues necessary for FB disambiguation. Purpose: In this study, two hearing aids with different processing schemes, which were both designed to preserve the ability to localize sounds in the horizontal plane (left-right and FB), were compared. Research Design: We compared horizontal (left-right and FB) sound localization performance of hearing aid users fitted with two types of behind-the-ear (BTE) devices. The first type of BTE device had four different programs that provided (1) no directionality, (2–3) symmetric frequency-dependent directionality, and (4) an asymmetric configuration. The second pair of BTE devices was evaluated in its omnidirectional setting. This setting automatically activates a soft forward-oriented directional scheme that mimics the pinna effect. Also, wireless communication between the hearing aids was present in this configuration (5). A broadband stimulus was used as a target signal. The directional hearing abilities of the listeners were also evaluated without hearing aids as a reference. Study Sample: A total of 12 listeners with moderate to severe hearing loss participated in this study. All were experienced hearing-aid users. As a reference, 11 listeners with normal hearing participated. Data Collection and Analysis: The participants were positioned in a 13-speaker array (left-right, –90°/+90°) or 7-speaker array (FB, 0–180°) and were asked to report the number of the loudspeaker located closest to where the sound was perceived.
The root mean square error was calculated for the left-right experiment, and the percentage of FB errors was used as a FB performance measure. Results were analyzed with repeated-measures analysis of variance. Results: For the left-right localization task, no significant differences could be proven between the unaided condition and both partial directional schemes and the omnidirectional scheme. The soft forward-oriented system and the asymmetric system did show a detrimental effect compared with the unaided condition. On average, localization was worst when users used the asymmetric condition. Analysis of the results of the FB experiment showed good performance, similar to unaided, with both the partial directional systems and the asymmetric configuration. Significantly worse performance was found with the omnidirectional and the omnidirectional soft forward-oriented BTE systems compared with the other hearing-aid systems. Conclusions: Bilaterally fitted partial directional systems preserve (part of) the binaural cues necessary for left-right localization and introduce, preserve, or enhance useful spectral cues that allow FB disambiguation. Omnidirectional systems, although good for left-right localization, do not provide the user with enough spectral information for an optimal FB localization performance.
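
The two performance measures can be computed as in the following minimal sketch, assuming responses and targets are azimuth angles in degrees and that a front-back error is a hemifield confusion around 90° on the 0–180° arc; the example trial data are invented.

```python
import numpy as np

def rms_error(responses_deg, targets_deg):
    """Root-mean-square left-right localization error in degrees."""
    d = np.asarray(responses_deg, float) - np.asarray(targets_deg, float)
    return float(np.sqrt(np.mean(d ** 2)))

def front_back_error_rate(responses_deg, targets_deg):
    """Fraction of trials in which source and response fall in different
    hemifields (front < 90° vs. rear > 90°, on a 0-180° arc)."""
    r = np.asarray(responses_deg, float)
    t = np.asarray(targets_deg, float)
    return float(np.mean((r > 90.0) != (t > 90.0)))

# Example: one perfect trial and one 10° miss, plus one front-back reversal
rms = rms_error([0.0, 10.0], [0.0, 0.0])                 # -> ~7.07°
fb = front_back_error_rate([30.0, 150.0], [30.0, 30.0])  # -> 0.5
```

Per-condition values of these two measures would then enter the repeated-measures analysis of variance described above.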


2019 ◽  
Vol 372 ◽  
pp. 62-68 ◽  
Author(s):  
Martijn J.H. Agterberg ◽  
Ad F.M. Snik ◽  
Rens M.G. Van de Goor ◽  
Myrthe K.S. Hol ◽  
A. John Van Opstal
