Bimodal Cochlear Implant Listeners’ Ability to Perceive Minimal Audible Angle Differences

2019 · Vol 30 (08) · pp. 659-671 · Author(s): Ashley Zaleski-King, Matthew J. Goupell, Dragana Barac-Cikoja, Matthew Bakke

Abstract
Background: Bilateral inputs should ideally improve sound localization and speech understanding in noise. However, for many bimodal listeners [i.e., individuals using a cochlear implant (CI) with a contralateral hearing aid (HA)], such bilateral benefits are, at best, inconsistent. The degree to which clinically available HA and CI devices can function together to preserve interaural time and level differences (ITDs and ILDs, respectively) well enough to support the localization of sound sources is a question with important ramifications for speech understanding in complex acoustic environments.
Purpose: To determine whether bimodal listeners are sensitive to changes in spatial location in a minimum audible angle (MAA) task.
Research Design: Repeated-measures design.
Study Sample: Seven adult bimodal CI users (28–62 years). All listeners reported regular use of digital HA technology in the nonimplanted ear.
Data Collection and Analysis: The seven bimodal listeners were asked to balance the loudness of prerecorded single-syllable utterances. The loudness-balanced stimuli were then presented via direct audio input to the two devices with an ITD applied. The task of the listener was to determine the perceived difference in processing delay (the interdevice delay [IDD]) between the CI and HA devices. Finally, virtual free-field MAA performance was measured at different spatial locations both with and without the IDD correction, which was added with the intent of perceptually synchronizing the devices.
Results: During the loudness-balancing task, all listeners required increased acoustic input to the HA relative to the CI most comfortable level to achieve equal interaural loudness. During the ITD task, three listeners could perceive changes in intracranial position by distinguishing sounds coming from the left or the right hemifield; when the CI was delayed by 0.73, 0.67, or 1.7 msec, the signal lateralized from one side to the other. When MAA localization performance was assessed, only three of the seven listeners consistently achieved above-chance performance, even when an IDD correction was included. It is not clear whether the listeners who could consistently complete the MAA task did so via binaural comparison or by extracting monaural loudness cues. Four listeners could not perform the MAA task, even though a monaural loudness-cue strategy was available to them.
Conclusions: These data suggest that sound localization is extremely difficult for most bimodal listeners, and this difficulty does not seem to be caused by large loudness imbalances or IDDs. Sound localization is best performed via a binaural comparison, in which frequency-matched inputs convey ITD and ILD information. Although low-frequency acoustic amplification with a HA combined with a CI may produce an overlapping region of frequency-matched inputs, and thus an opportunity for binaural comparisons in some bimodal listeners, our study showed that this may not be beneficial or useful for spatial location discrimination tasks. Our listeners' inability to use monaural level cues to perform the MAA task highlights the difficulty of using a HA and CI together to glean information about the direction of a sound source.
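For context on the IDD magnitudes reported above (0.67–1.7 msec): the largest ITD a human head produces naturally is only about 0.66 msec, so an uncorrected interdevice delay can exceed the entire natural ITD range. A minimal sketch using Woodworth's spherical-head approximation (the head radius and speed-of-sound values are illustrative assumptions, not from the study):

```python
import numpy as np

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth's spherical-head approximation of the interaural
    time difference (ITD) for a far-field source at a given azimuth."""
    theta = np.radians(azimuth_deg)
    return head_radius_m / c * (theta + np.sin(theta))  # seconds

# Largest natural ITD, for a source at 90 degrees azimuth:
print(woodworth_itd(90.0) * 1e3)  # ~0.66 msec
```

By this approximation, a 1.7 msec interdevice delay is more than twice the largest ITD any real source direction can produce, which is consistent with the signal lateralizing fully to one side.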

2021 · Vol 12 · Author(s): Alexandra Annemarie Ludwig, Sylvia Meuret, Rolf-Dieter Battmer, Marc Schönwiesner, Michael Fuchs, ...

Spatial hearing is crucial in real life but deteriorates in participants with severe sensorineural hearing loss or single-sided deafness. This ability can potentially be improved with a unilateral cochlear implant (CI). The present study investigated measures of sound localization in participants with single-sided deafness provided with a CI. Sound localization was measured separately at eight loudspeaker positions (4°, 30°, 60°, and 90° on the CI side and on the normal-hearing side). Low- and high-frequency noise bursts were used in the tests to investigate possible differences in the processing of interaural time and level differences. Data were compared with those of normal-hearing adults aged between 20 and 83 years. In addition, the benefit of the CI in speech understanding in noise was compared with localization ability. Fifteen out of 18 participants were able to localize signals on the CI side and on the normal-hearing side, although performance was highly variable across participants. Three participants always pointed to the normal-hearing side, irrespective of the location of the signal. The comparison with control data showed that participants had particular difficulties localizing sounds at frontal locations and on the CI side. In contrast to most previous results, participants were able to localize low-frequency signals, although they localized high-frequency signals more accurately. Speech understanding in noise was better with the CI than without it, but only at a position where the CI also improved sound localization. Our data suggest that a CI can, to a large extent, restore localization in participants with single-sided deafness. Difficulties may remain at frontal locations and on the CI side. However, speech understanding in noise improves when wearing the CI. Treatment with a CI in these participants might provide real-world benefits, such as improved orientation in traffic and speech understanding in difficult listening situations.


2019 · Vol 23 · pp. 233121651984733 · Author(s): Sebastian A. Ausili, Bradford Backus, Martijn J. H. Agterberg, A. John van Opstal, Marc M. van Wanrooij

Bilateral cochlear-implant (CI) users and single-sided deaf listeners with a CI are less effective at localizing sounds than normal-hearing (NH) listeners. This performance gap is due to the degradation of binaural and monaural sound localization cues, caused by a combination of device-related and patient-related issues. In this study, we targeted the device-related issues by measuring sound localization performance of 11 NH listeners, listening to free-field stimuli processed by a real-time CI vocoder. The use of a real-time vocoder is a new approach, which enables testing in a free-field environment. For the NH listening condition, all listeners accurately and precisely localized sounds according to a linear stimulus–response relationship with an optimal gain and a minimal bias both in the azimuth and in the elevation directions. In contrast, when listening with bilateral real-time vocoders, listeners tended to orient either to the left or to the right in azimuth and were unable to determine sound source elevation. When listening with an NH ear and a unilateral vocoder, localization was impoverished on the vocoder side but improved toward the NH side. Localization performance was also reflected by systematic variations in reaction times across listening conditions. We conclude that perturbation of interaural temporal cues, reduction of interaural level cues, and removal of spectral pinna cues by the vocoder impairs sound localization. Listeners seem to ignore cues that were made unreliable by the vocoder, leading to acute reweighting of available localization cues. We discuss how current CI processors prevent CI users from localizing sounds in everyday environments.
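The CI vocoder described above preserves each band's temporal envelope while discarding temporal fine structure and spectral detail. A minimal offline noise-vocoder sketch (channel count, filter orders, and envelope cutoff are illustrative assumptions; the study used a proprietary real-time vocoder):

```python
import numpy as np
from scipy import signal

def vocode(x, fs, n_channels=8, f_lo=100.0, f_hi=8000.0, env_cutoff=160.0):
    """Noise-excited channel vocoder, a common acoustic simulation of CI
    processing: keep each band's temporal envelope, replace its fine
    structure with band-limited noise."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced band edges
    carrier = np.random.default_rng(0).standard_normal(len(x))
    env_sos = signal.butter(2, env_cutoff, btype="low", fs=fs, output="sos")
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = signal.butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = signal.sosfiltfilt(band_sos, x)
        # Envelope: rectify, then low-pass filter
        env = np.clip(signal.sosfiltfilt(env_sos, np.abs(band)), 0.0, None)
        # Re-impose the band's envelope on noise filtered into the same band
        out += signal.sosfiltfilt(band_sos, carrier) * env
    return out
```

Because the noise carriers carry no reliable interaural fine-structure timing and the processing flattens spectral pinna cues, a listener hearing the vocoded output loses exactly the cue classes the study identifies as degraded.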


2018 · Vol 23 (1) · pp. 32-38 · Author(s): Jantien L. Vroegop, Nienke C. Homans, André Goedegebure, J. Gertjan Dingemanse, Teun van Immerzeel, ...

Although the benefit of bimodal listening for cochlear implant users is generally agreed upon, speech comprehension remains a challenge in acoustically complex real-life environments due to reverberation and disturbing background noise. One way to additionally improve bimodal auditory performance is the use of directional microphones. The objective of this study was to investigate the effect of a binaural beamformer for bimodal cochlear implant (CI) users. This prospective study measured speech reception thresholds (SRT) in noise in a repeated-measures design that varied in listening modality for static and dynamic listening conditions. A significant improvement in SRT of 4.7 dB was found with the binaural beamformer switched on in the bimodal static listening condition. No significant improvement was found in the dynamic listening condition. We conclude that there is a clear additional advantage of the binaural beamformer in bimodal CI users for predictable/static listening conditions with frontal target speech and spatially separated noise sources.
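A binaural beamformer combines the microphone signals of the two devices to emphasize frontal speech. A minimal two-microphone delay-and-sum sketch (the microphone spacing and steering geometry are illustrative assumptions; the clinical beamformer is adaptive and considerably more sophisticated):

```python
import numpy as np

def delay_and_sum(left, right, fs, theta_deg=0.0, mic_distance_m=0.18):
    """Two-microphone delay-and-sum beamformer steered toward azimuth
    theta_deg (0 = straight ahead). mic_distance_m approximates the
    spacing between ear-level devices (an illustrative assumption)."""
    c = 343.0  # speed of sound in air, m/s
    tau = mic_distance_m * np.sin(np.radians(theta_deg)) / c
    shift = int(round(tau * fs))  # inter-microphone delay in samples
    if shift > 0:    # time-advance the right channel
        right = np.concatenate([right[shift:], np.zeros(shift)])
    elif shift < 0:  # time-advance the left channel
        left = np.concatenate([left[-shift:], np.zeros(-shift)])
    return 0.5 * (left + right)
```

For a frontal target, the aligned channels add coherently (+6 dB) while spatially separated, uncorrelated noise adds only in power (+3 dB), giving roughly a 3 dB SNR gain; the 4.7 dB reported above is consistent with an adaptive beamformer outperforming this fixed sketch.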


2019 · Vol 23 (03) · pp. e276-e280 · Author(s): Gleide Viviani Maciel Almeida, Angela Ribas, Jorge Calleros

Introduction: Even people with normal hearing may have difficulty locating a sound source in unfavorable sound environments where competing noise is intense. Objective: To develop, describe, validate and establish the normality curve of the sound localization test. Method: The sample consisted of 100 healthy subjects with normal hearing, > 18 years old, who agreed to participate in the study. The sound localization test was applied after the subjects underwent a tonal audiometry exam. For this purpose, a calibrated free-field test environment was set up. Then, 30 random pure tones were presented through 2 speakers placed at 45° (on the right and on the left sides of the subject), and the noise was presented from a 3rd speaker, placed at 180°. The noise was presented in 3 hearing situations: optimal listening condition (no noise), noise at a 0 dB signal-to-noise ratio, and noise at a −10 dB signal-to-noise ratio. The subject was asked to point out the side where the pure tone was perceived, even in the presence of noise. Results: All of the 100 participants performed the test, in an average time of 99 seconds. The average score was 21, the median score was 23, and the standard deviation was 3.05. Conclusion: The sound localization test proved to be easy to set up and to apply. The results obtained in the validation of the test suggest that individuals with normal hearing should locate 70% of the presented stimuli. The test can constitute an important instrument in the measurement of noise interference in the ability to localize sound.
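The normative mean of 21 out of 30 (70%) can be checked against chance performance: assuming the task reduces to a two-alternative left/right judgment, guessing yields 50% on average. A quick binomial sanity check (a sketch; the study itself reports only descriptive statistics):

```python
from scipy.stats import binom

n_trials, chance = 30, 0.5   # two speakers: left vs. right
score = 21                   # normative mean reported for normal hearers
# Probability of reaching 21/30 or better by guessing alone
p_guess = binom.sf(score - 1, n_trials, chance)
print(round(p_guess, 3))  # ~0.021
```

So the 70% normative criterion sits comfortably above what guessing would produce (p ≈ 0.02), supporting its use as a pass/fail cutoff.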


2015 · Vol 26 (06) · pp. 532-539 · Author(s): Jace Wolfe, Mila Morais, Erin Schafer

Background: Cochlear implant (CI) recipients experience difficulty understanding speech in noise. Remote-microphone technology that improves the signal-to-noise ratio is recognized as an effective means to improve speech recognition in noise; however, there are no published studies evaluating the potential benefits of a wireless, remote-microphone, digital, audio-streaming accessory device (hereafter referred to as a remote-microphone accessory) designed to deliver audio signals directly to a CI sound processor. Purpose: The objective of this study was to compare speech recognition in quiet and in noise of recipients while using their CI alone and with a remote-microphone accessory. Research Design: A two-way repeated-measures design was used to evaluate performance differences obtained in quiet and in increasing levels of competing noise with the CI sound processor alone and with the sound processor paired with the remote-microphone accessory. Study Sample: Sixteen users of Cochlear Nucleus 24 Freedom, CI512, and CI422 implants were included in the study. Data Collection and Analysis: Participants were evaluated in 14 conditions, including use of the sound processor alone and with the remote-microphone accessory, in quiet and with 65 dBA speech (at the location of the participant; 85 dBA at the location of the remote microphone) in competing noise at 50, 55, 60, 65, 70, and 75 dBA noise levels. Speech recognition was evaluated in each of these conditions with one full list of AzBio sentences. Results: Speech recognition in quiet and at all competing noise levels, except the 75 dBA condition, was significantly better with use of the remote-microphone accessory compared with participants’ performance with the CI sound processor alone. As expected, in all technology conditions, performance was significantly poorer as the competing noise level increased.
Conclusions: Use of a remote-microphone accessory designed for a CI sound processor provides superior speech recognition in quiet and in noise when compared with performance obtained with the CI sound processor alone.


1989 · Vol 68 (3) · pp. 955-962 · Author(s): Heinz Krombholz

The connection between lateral dominance and force of handgrip was investigated by means of a repeated-measures design. A total of 521 children participated. Performance on a paper-and-pencil task and force of handgrip were measured at the beginning of the first year at school and at the end of the first and of the second years at school. On the paper-and-pencil task, 84% of the children were classified as right-handers, 8% as left-handers, and 8% as ambidextrous. About 2% of children classified as right-handers at the beginning of the first year at school were classified as left-handers at the end of the second year at school, while 18% of left-handers shifted to right-handedness. 52% of children attained their best performance on handgrip with the right hand and 39% with the left hand. No differences in force of handgrip could be found, for either the right or the left hand, between right-handed, left-handed, and ambidextrous children. For right-handers, however, the more skilled hand showed superior performance in force of handgrip. These results indicate that left-handers are less strongly handed than right-handers.


2010 · Vol 21 (07) · pp. 441-451 · Author(s): René H. Gifford, Lawrence J. Revit

Background: Although cochlear implant patients are achieving increasingly higher levels of performance, speech perception in noise continues to be problematic. The newest generations of implant speech processors are equipped with preprocessing and/or external accessories that are purported to improve listening in noise. Most speech perception measures in the clinical setting, however, do not provide a close approximation to real-world listening environments. Purpose: To assess speech perception for adult cochlear implant recipients in the presence of a realistic restaurant simulation generated by an eight-loudspeaker (R-SPACE™) array in order to determine whether commercially available preprocessing strategies and/or external accessories yield improved sentence recognition in noise. Research Design: Single-subject, repeated-measures design with two groups of participants: Advanced Bionics and Cochlear Corporation recipients. Study Sample: Thirty-four subjects, ranging in age from 18 to 90 yr (mean 54.5 yr), participated in this prospective study. Fourteen subjects were Advanced Bionics recipients, and 20 subjects were Cochlear Corporation recipients. Intervention: Speech reception thresholds (SRTs) in semidiffuse restaurant noise originating from an eight-loudspeaker array were assessed with the subjects' preferred listening programs as well as with the addition of either Beam™ preprocessing (Cochlear Corporation) or the T-Mic® accessory option (Advanced Bionics). Data Collection and Analysis: In Experiment 1, adaptive SRTs with the Hearing in Noise Test sentences were obtained for all 34 subjects. For Cochlear Corporation recipients, SRTs were obtained with their preferred everyday listening program as well as with the addition of Focus preprocessing. For Advanced Bionics recipients, SRTs were obtained with the integrated behind-the-ear (BTE) mic as well as with the T-Mic. 
Statistical analysis using a repeated-measures analysis of variance (ANOVA) evaluated the effects of the preprocessing strategy or external accessory in reducing the SRT in noise. In addition, a standard t-test was run to evaluate effectiveness across manufacturers in improving the SRT in noise. In Experiment 2, 16 of the 20 Cochlear Corporation subjects were reassessed, obtaining an SRT in noise using the manufacturer-suggested “Everyday,” “Noise,” and “Focus” preprocessing strategies. A repeated-measures ANOVA was employed to assess the effects of preprocessing. Results: The primary findings were (i) both Noise and Focus preprocessing strategies (Cochlear Corporation) significantly improved the SRT in noise as compared to Everyday preprocessing, (ii) the T-Mic accessory option (Advanced Bionics) significantly improved the SRT as compared to the BTE mic, and (iii) Focus preprocessing and the T-Mic resulted in similar degrees of improvement that were not found to be significantly different from one another. Conclusion: Options available in current cochlear implant sound processors are able to significantly improve speech understanding in a realistic, semidiffuse noise with both Cochlear Corporation and Advanced Bionics systems. For Cochlear Corporation recipients, Focus preprocessing yields the best speech-recognition performance in a complex listening environment; however, it is recommended that Noise preprocessing be used as the new default for everyday listening environments to avoid the need for switching programs throughout the day. For Advanced Bionics recipients, the T-Mic offers significantly improved performance in noise and is recommended for everyday use in all listening environments.
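The adaptive SRT measurements above typically follow a one-down/one-up staircase that converges on the 50%-correct point: the SNR drops after each correct response and rises after each error. A minimal sketch (the step size, trial count, and averaging rule are illustrative assumptions, not the exact HINT procedure):

```python
def adaptive_srt(respond, start_snr=10.0, step=2.0, n_trials=20):
    """One-down/one-up adaptive track converging on ~50% correct.
    `respond(snr)` returns True if the sentence was repeated correctly."""
    snr, track = start_snr, []
    for _ in range(n_trials):
        track.append(snr)
        snr += -step if respond(snr) else step  # down if correct, up if not
    tail = track[len(track) // 2:]   # discard the initial approach phase
    return sum(tail) / len(tail)     # SRT estimate in dB SNR
```

With a deterministic listener who is correct only above 0 dB SNR, the track descends from the start level and then oscillates around the 0–2 dB boundary, so the averaged tail lands near that threshold.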


2015 · Vol 322 · pp. 107-111 · Author(s): Michael F. Dorman, Sarah Cook, Anthony Spahr, Ting Zhang, Louise Loiselle, ...

2019 · Vol 30 (07) · pp. 607-618 · Author(s): Thomas Wesarg, Susan Arndt, Konstantin Wiebe, Frauke Schmid, Annika Huber, ...

Abstract
Background: Previous research in cochlear implant (CI) recipients with bilateral severe-to-profound sensorineural hearing loss showed improvements in speech recognition in noise using remote wireless microphone systems. However, to our knowledge, no previous studies have addressed the benefit of these systems in CI recipients with single-sided deafness.
Purpose: The objective of this study was to evaluate the potential improvement in speech recognition in noise for distant speakers in single-sided deaf (SSD) CI recipients obtained using the digital remote wireless microphone system Roger. In addition, we evaluated the potential benefit in normal-hearing (NH) participants gained by applying this system.
Research Design: Speech recognition in noise for a distant speaker in different conditions with and without Roger was evaluated with a two-way repeated-measures design in each group, SSD CI recipients and NH participants. Post hoc analyses were conducted using pairwise comparison t-tests with Bonferroni correction.
Study Sample: Eleven adult SSD participants aided with CIs and eleven adult NH participants were included in this study.
Data Collection and Analysis: All participants were assessed in 15 test conditions (5 listening conditions × 3 noise levels) each. The listening conditions for SSD CI recipients were: (I) only NH ear, CI turned off; (II) NH ear and CI (turned on); (III) NH ear and CI with Roger 14; (IV) NH ear with Roger Focus and CI; and (V) NH ear with Roger Focus and CI with Roger 14. For the NH participants, five corresponding listening conditions were chosen: (I) only better ear, weaker ear masked; (II) both ears; (III) better ear and weaker ear with Roger Focus; (IV) better ear with Roger Focus and weaker ear; and (V) both ears with Roger Focus. The speech level was fixed at 65 dB(A) at 1 meter from the speech-presenting loudspeaker, yielding a speech level of 56.5 dB(A) at the recipient's head. Noise levels were 55, 65, and 75 dB(A). Digitally altered noise recorded in school classrooms was used as competing noise. Speech recognition was measured in percent correct using the Oldenburg sentence test.
Results: In SSD CI recipients, a significant improvement in speech recognition was found for all listening conditions with Roger (III, IV, and V) versus all no-Roger conditions (I and II) at the higher noise levels (65 and 75 dB[A]). NH participants also benefited significantly from the application of Roger at the higher noise levels. In both groups, no significant difference was detected between any of the listening conditions at 55 dB(A) competing noise. There was also no significant difference between any of the Roger conditions (III, IV, and V) across all noise levels.
Conclusions: The application of the advanced remote wireless microphone system Roger in SSD CI recipients provided significant benefits in speech recognition for distant speakers at higher noise levels. In NH participants, the application of Roger also produced a significant benefit in speech recognition in noise.
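The speech levels quoted above (65 dB(A) at 1 m, 56.5 dB(A) at the head) illustrate why a remote microphone helps: under the free-field inverse-square law, level falls 6 dB per doubling of distance, so the 8.5 dB drop places the talker roughly 2.7 m away, while the Roger microphone picks up speech near the talker's mouth at a much higher level. A quick check (assuming ideal spherical spreading with no room reflections):

```python
import math

def level_at(distance_m, level_at_1m_db):
    """Free-field inverse-square law: -20*log10(d) dB relative to 1 m."""
    return level_at_1m_db - 20.0 * math.log10(distance_m)

print(round(level_at(2.66, 65.0), 1))  # ~56.5 dB(A), matching the reported head level
```

In a real classroom, reverberation makes the far-field level fall off more slowly than this, but the direct-sound advantage of a microphone worn near the talker remains.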


2011 · Vol 22 (09) · pp. 623-632 · Author(s): René H. Gifford, Amy P. Olund, Melissa DeJong

Background: Current cochlear implant recipients are achieving increasingly higher levels of speech recognition; however, the presence of background noise continues to significantly degrade speech understanding for even the best performers. Newer generation Nucleus cochlear implant sound processors can be programmed with SmartSound strategies that have been shown to improve speech understanding in noise for adult cochlear implant recipients. The applicability of these strategies for use in children, however, is not fully understood nor widely accepted. Purpose: To assess speech perception for pediatric cochlear implant recipients in the presence of a realistic restaurant simulation generated by an eight-loudspeaker (R-SPACE™) array in order to determine whether Nucleus sound processor SmartSound strategies yield improved sentence recognition in noise for children who learn language through the implant. Research Design: Single subject, repeated measures design. Study Sample: Twenty-two experimental subjects with cochlear implants (mean age 11.1 yr) and 25 control subjects with normal hearing (mean age 9.6 yr) participated in this prospective study. Intervention: Speech reception thresholds (SRT) in semidiffuse restaurant noise originating from an eight-loudspeaker array were assessed with the experimental subjects’ everyday program incorporating Adaptive Dynamic Range Optimization (ADRO) as well as with the addition of Autosensitivity control (ASC). Data Collection and Analysis: Adaptive SRTs with the Hearing In Noise Test (HINT) sentences were obtained for all 22 experimental subjects, and performance—in percent correct—was assessed in a fixed +6 dB SNR (signal-to-noise ratio) for a six-subject subset. Statistical analysis using a repeated-measures analysis of variance (ANOVA) evaluated the effects of the SmartSound setting on the SRT in noise. 
Results: The primary findings mirrored those reported previously with adult cochlear implant recipients in that the addition of ASC to ADRO significantly improved speech recognition in noise for pediatric cochlear implant recipients. The mean degree of improvement in the SRT with the addition of ASC to ADRO was 3.5 dB for a mean SRT of 10.9 dB SNR. Thus, despite the fact that these children have acquired auditory/oral speech and language through the use of their cochlear implant(s) equipped with ADRO, the addition of ASC significantly improved their ability to recognize speech in high levels of diffuse background noise. The mean SRT for the control subjects with normal hearing was 0.0 dB SNR. Given that the mean SRT for the experimental group was 10.9 dB SNR, despite the improvements in performance observed with the addition of ASC, cochlear implants still do not completely overcome the speech perception deficit encountered in noisy environments accompanying the diagnosis of severe-to-profound hearing loss. Conclusion: SmartSound strategies currently available in latest generation Nucleus cochlear implant sound processors are able to significantly improve speech understanding in a realistic, semidiffuse noise for pediatric cochlear implant recipients. Despite the reluctance of pediatric audiologists to utilize SmartSound settings for regular use, the results of the current study support the addition of ASC to ADRO for everyday listening environments to improve speech perception in a child's typical everyday program.

