Speech understanding in adult cochlear implant users

1994 ◽  
Vol 95 (5) ◽  
pp. 2905-2905
Author(s):  
Richard S. Tyler ◽  
Mary Lowder ◽  
George Woodworth ◽  
Aaron Parkinson

2018 ◽  
Vol 39 (5) ◽  
pp. 571-575 ◽  
Author(s):  
Jason A. Brant ◽  
Steven J. Eliades ◽  
Hannah Kaufman ◽  
Jinbo Chen ◽  
Michael J. Ruckenstein

2018 ◽  
Author(s):  
Eline Verschueren ◽  
Ben Somers ◽  
Tom Francart

The speech envelope is essential for speech understanding and can be reconstructed from the electroencephalogram (EEG) recorded while listening to running speech. This so-called neural envelope tracking has been shown to relate to speech understanding in normal-hearing listeners, but has barely been investigated in persons wearing cochlear implants (CIs). We investigated the relation between speech understanding and neural envelope tracking in CI users. EEG was recorded in 8 CI users while they listened to a story. Speech understanding was varied by changing the intensity of the presented speech. The speech envelope was reconstructed from the EEG using a linear decoder and then correlated with the envelope of the speech stimulus as a measure of neural envelope tracking, which was compared to actual speech understanding. This study showed that neural envelope tracking increased with increasing speech understanding in every participant. Furthermore, behaviorally measured speech understanding was correlated with participant-specific neural envelope tracking results, indicating the potential of neural envelope tracking as an objective measure of speech understanding in CI users. This could enable objective and automatic fitting of CIs and pave the way toward closed-loop CIs that adjust continuously and automatically to individual CI users.
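The analysis described above, a linear backward model that decodes the envelope from multichannel EEG and then correlates the reconstruction with the stimulus envelope, can be sketched as follows. The lag count, ridge regularizer, and simulated array shapes are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def lagged(eeg, n_lags):
    """Stack time-lagged copies of each EEG channel into a design
    matrix of shape (n_samples, n_channels * n_lags)."""
    n_ch, n_t = eeg.shape
    X = np.zeros((n_t, n_ch * n_lags))
    for ch in range(n_ch):
        for lag in range(n_lags):
            X[lag:, ch * n_lags + lag] = eeg[ch, :n_t - lag]
    return X

def fit_decoder(eeg, envelope, n_lags=16, ridge=1e-3):
    """Ridge-regularized least-squares backward model from EEG to envelope."""
    X = lagged(eeg, n_lags)
    XtX = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ envelope)

def envelope_tracking(eeg, envelope, w, n_lags=16):
    """Neural envelope tracking: Pearson correlation between the
    EEG-reconstructed envelope and the actual stimulus envelope."""
    reconstructed = lagged(eeg, n_lags) @ w
    return np.corrcoef(reconstructed, envelope)[0, 1]
```

In practice the decoder would be trained and evaluated on separate data; the correlation on held-out data is what would be compared against behavioral speech understanding.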


1984 ◽  
Vol 76 (S1) ◽  
pp. S48-S48
Author(s):  
Ingeborg J. Hochmair‐Desoyer ◽  
Helmut K. Stiglbrunner ◽  
Ernst‐Ludwig Wallenberg

2019 ◽  
Vol 23 ◽  
pp. 233121651988668 ◽  
Author(s):  
Zilong Xie ◽  
Casey R. Gaskins ◽  
Maureen J. Shader ◽  
Sandra Gordon-Salant ◽  
Samira Anderson ◽  
...  

Aging may limit speech understanding outcomes in cochlear-implant (CI) users. Here, we examined age-related declines in auditory temporal processing as a potential mechanism underlying the speech understanding deficits associated with aging in CI users. Auditory temporal processing was assessed with a categorization task for the words dish and ditch (i.e., identify each token as the word dish or ditch) on a continuum of speech tokens with varying silence duration (0 to 60 ms) prior to the final fricative. In Experiments 1 and 2, younger CI (YCI), middle-aged CI (MCI), and older CI (OCI) users performed the categorization task across a range of presentation levels (25 to 85 dB). Relative to YCI users, OCI users required longer silence durations to identify ditch and showed a reduced ability to distinguish dish from ditch (shallower slopes in the categorization function). Critically, we observed age-related performance differences only at higher presentation levels. This contrasted with findings from normal-hearing listeners in Experiment 3, which demonstrated age-related performance differences independent of presentation level. In summary, aging in CI users appears to degrade the ability to use brief temporal cues in word identification, particularly at high presentation levels. Age-specific CI programming may potentially improve clinical outcomes for speech understanding performance by older CI listeners.
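The categorization function and its slope can be illustrated with a simple logistic fit of the proportion of "ditch" responses against silence duration; the parameterization and starting values below are assumptions for illustration, not the authors' analysis:

```python
import numpy as np
from scipy.optimize import curve_fit

def p_ditch(silence_ms, midpoint_ms, slope):
    """Logistic categorization function: probability of a 'ditch'
    response as a function of the silence duration (ms) before the
    final fricative."""
    return 1.0 / (1.0 + np.exp(-slope * (silence_ms - midpoint_ms)))

def fit_categorization(silence_ms, responses):
    """Fit the midpoint (silence needed for 50% 'ditch' responses)
    and the slope; a shallower slope indicates poorer dish/ditch
    discrimination."""
    (midpoint, slope), _ = curve_fit(p_ditch, silence_ms, responses,
                                     p0=[30.0, 0.2], maxfev=10000)
    return midpoint, slope
```

Under this sketch, a longer fitted midpoint corresponds to the longer silence durations OCI users required, and a smaller fitted slope to their shallower categorization functions.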


2019 ◽  
Vol 30 (08) ◽  
pp. 659-671 ◽  
Author(s):  
Ashley Zaleski-King ◽  
Matthew J. Goupell ◽  
Dragana Barac-Cikoja ◽  
Matthew Bakke

Background: Bilateral inputs should ideally improve sound localization and speech understanding in noise. However, for many bimodal listeners [i.e., individuals using a cochlear implant (CI) with a contralateral hearing aid (HA)], such bilateral benefits are, at best, inconsistent. The degree to which clinically available HA and CI devices can function together to preserve interaural time and level differences (ITDs and ILDs, respectively) well enough to support the localization of sound sources is a question with important ramifications for speech understanding in complex acoustic environments.
Purpose: To determine whether bimodal listeners are sensitive to changes in spatial location in a minimum audible angle (MAA) task.
Research Design: Repeated-measures design.
Study Sample: Seven adult bimodal CI users (28–62 years). All listeners reported regular use of digital HA technology in the nonimplanted ear.
Data Collection and Analysis: The seven bimodal listeners were asked to balance the loudness of prerecorded single-syllable utterances. The loudness-balanced stimuli were then presented via the direct audio inputs of the two devices with an ITD applied. The task of the listener was to determine the perceived difference in processing delay (the interdevice delay [IDD]) between the CI and HA devices. Finally, virtual free-field MAA performance was measured at different spatial locations, both with and without an IDD correction, which was added with the intent of perceptually synchronizing the devices.
Results: During the loudness-balancing task, all listeners required increased acoustic input to the HA, relative to the CI most comfortable level, to achieve equal interaural loudness. During the ITD task, three listeners could perceive changes in intracranial position by distinguishing sounds coming from the left or the right hemifield; when the CI was delayed by 0.73, 0.67, or 1.7 msec, the signal lateralized from one side to the other. When MAA localization performance was assessed, only three of the seven listeners consistently achieved above-chance performance, even when an IDD correction was included. It is not clear whether the listeners who were able to consistently complete the MAA task did so via binaural comparison or by extracting monaural loudness cues. Four listeners could not perform the MAA task, even though they could have used a monaural loudness-cue strategy.
Conclusions: These data suggest that sound localization is extremely difficult for most bimodal listeners, and this difficulty does not seem to be caused by large loudness imbalances or IDDs. Sound localization is best performed via a binaural comparison, in which frequency-matched inputs convey ITD and ILD information. Although low-frequency acoustic amplification with an HA, combined with a CI, may produce an overlapping region of frequency-matched inputs and thus provide an opportunity for binaural comparisons in some bimodal listeners, our study showed that this may not be beneficial or useful for spatial location discrimination tasks. The inability of our listeners to use monaural level cues to perform the MAA task highlights the difficulty of using an HA and a CI together to glean information about the direction of a sound source.
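Imposing an ITD on the test stimuli, or applying an IDD correction to compensate the slower device, amounts to delaying one channel relative to the other. A minimal sketch, assuming whole-sample delays at a hypothetical sample rate:

```python
import numpy as np

def apply_delay(signal, fs, delay_ms):
    """Delay a signal by delay_ms via a whole-sample shift with zero
    padding, keeping the original length. The same operation serves
    to impose an ITD on one ear's input or to apply an interdevice-
    delay (IDD) correction to the faster device."""
    n = int(round(delay_ms * 1e-3 * fs))
    return np.concatenate([np.zeros(n), signal[:len(signal) - n]])
```

Sub-sample delays (the 0.73 and 0.67 msec values above are not integer multiples of typical sample periods) would require interpolation or fractional-delay filtering rather than this simple shift.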


2015 ◽  
Vol 322 ◽  
pp. 107-111 ◽  
Author(s):  
Michael F. Dorman ◽  
Sarah Cook ◽  
Anthony Spahr ◽  
Ting Zhang ◽  
Louise Loiselle ◽  
...  

PLoS ONE ◽  
2021 ◽  
Vol 16 (12) ◽  
pp. e0261295
Author(s):  
Florian Langner ◽  
Julie G. Arenberg ◽  
Andreas Büchner ◽  
Waldo Nogueira

Objectives: The relationship between electrode-nerve interface (ENI) estimates and inter-subject differences in speech performance with sequential and simultaneous channel stimulation in adult cochlear implant listeners was explored. We investigated the hypothesis that individuals with good ENIs would perform better with simultaneous than with sequential channel stimulation speech processing strategies, relative to those estimated to have poor ENIs.
Methods: Fourteen postlingually deafened cochlear implant users participated in the study. Speech understanding was assessed with a sentence test at signal-to-noise ratios that resulted in 50% performance for each user with the baseline strategy, F120 Sequential. Two simultaneous stimulation strategies with either two (Paired) or three sets of virtual channels (Triplet) were tested at the same signal-to-noise ratio. ENI measures were estimated through: (I) voltage spread with electrical field imaging, (II) behavioral detection thresholds with focused stimulation, and (III) the slope (IPG slope effect) and 50%-point differences (dB offset effect) of amplitude growth functions from electrically evoked compound action potentials (eCAPs) recorded with two interphase gaps (IPGs).
Results: A significant effect of strategy on speech understanding performance was found, with Triplets showing a trend toward worse speech understanding performance than sequential stimulation. Focused thresholds correlated positively with the difference required to reach most comfortable level (MCL) between the Sequential and Triplet strategies, an indirect measure of channel interaction. A significant offset effect (the difference in dB between the 50%-points of the eCAP amplitude growth functions measured with the two IPGs) was observed. No significant correlation was observed between the slopes for the two IPGs tested. None of the measures used in this study correlated with the differences in speech understanding scores between strategies.
Conclusions: The ENI measure based on behavioral focused thresholds could explain some of the difference in MCLs, but none of the ENI measures could explain the decrease in speech understanding with increasing numbers of simultaneously stimulated electrodes in the processing strategies.
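The dB offset effect can be illustrated as the shift in the stimulus level at which an amplitude growth function (AGF) reaches half of its maximum when the interphase gap changes. The linear interpolation and toy AGFs below are illustrative assumptions, not the study's fitting procedure:

```python
import numpy as np

def level_at_half_max(levels_db, amplitudes):
    """Stimulus level at which an eCAP amplitude growth function
    reaches 50% of its maximum amplitude (linear interpolation;
    assumes amplitude grows monotonically with level)."""
    return np.interp(amplitudes.max() / 2.0, amplitudes, levels_db)

def offset_effect(levels_db, agf_short_ipg, agf_long_ipg):
    """dB offset effect: shift of the 50%-point between AGFs
    measured with a short and a long interphase gap (IPG)."""
    return (level_at_half_max(levels_db, agf_long_ipg)
            - level_at_half_max(levels_db, agf_short_ipg))
```

A negative value under this convention means the long-IPG AGF reaches its 50%-point at a lower level than the short-IPG AGF.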


2021 ◽  
Vol 12 ◽  
Author(s):  
Alexandra Annemarie Ludwig ◽  
Sylvia Meuret ◽  
Rolf-Dieter Battmer ◽  
Marc Schönwiesner ◽  
Michael Fuchs ◽  
...  

Spatial hearing is crucial in real life but deteriorates in listeners with severe sensorineural hearing loss or single-sided deafness. This ability can potentially be improved with a unilateral cochlear implant (CI). The present study investigated sound localization in participants with single-sided deafness provided with a CI. Sound localization was measured separately at eight loudspeaker positions (4°, 30°, 60°, and 90° on the CI side and on the normal-hearing side). Low- and high-frequency noise bursts were used to probe possible differences in the processing of interaural time and level differences. Data were compared to those of normal-hearing adults aged 20 to 83 years. In addition, the benefit of the CI for speech understanding in noise was compared to localization ability. Fifteen of 18 participants were able to localize signals on both the CI side and the normal-hearing side, although performance was highly variable across participants. Three participants always pointed to the normal-hearing side, irrespective of the location of the signal. The comparison with control data showed that participants had particular difficulty localizing sounds at frontal locations and on the CI side. In contrast to most previous results, participants were able to localize low-frequency signals, although they localized high-frequency signals more accurately. Speech understanding in noise was better with the CI than without it, but only at a position where the CI also improved sound localization. Our data suggest that a CI can, to a large extent, restore sound localization in participants with single-sided deafness; difficulties may remain at frontal locations and on the CI side. Treatment with a CI in these participants might provide real-world benefits, such as improved orientation in traffic and better speech understanding in difficult listening situations.
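Localization accuracy in tasks like this is commonly summarized as a root-mean-square error over presented versus reported azimuths; a minimal sketch, with the sign convention (negative = CI side, positive = normal-hearing side) assumed for illustration:

```python
import numpy as np

def rms_localization_error(presented_deg, responded_deg):
    """Root-mean-square localization error in degrees across trials.
    Azimuths are signed; here negative is taken as the CI side and
    positive as the normal-hearing side (assumed convention)."""
    p = np.asarray(presented_deg, dtype=float)
    r = np.asarray(responded_deg, dtype=float)
    return float(np.sqrt(np.mean((r - p) ** 2)))
```

A listener who always points to the normal-hearing side, as three participants did, would show large errors for all source positions on the CI side.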

