Phoneme recognition by cochlear implant users as a function of signal-to-noise ratio and nonlinear amplitude mapping

1999 ◽  
Vol 106 (2) ◽  
pp. L18-L23 ◽  
Author(s):  
Qian-Jie Fu ◽  
Robert V. Shannon

2020 ◽  
Vol 24 ◽  
pp. 233121652097034 ◽  
Author(s):  
Florian Langner ◽  
Andreas Büchner ◽  
Waldo Nogueira

Cochlear implant (CI) sound processing typically uses a front-end automatic gain control (AGC) that reduces the acoustic dynamic range (DR) to control the output level and protect the signal processing against large amplitude changes. However, AGC can also introduce distortions into the signal and prevents a direct mapping between acoustic input and electric output. For speech in noise, a reduction in DR can lower speech intelligibility because the modulations of speech are compressed. This study proposes a CI signal processing scheme that preserves the full acoustic DR and adapts its properties to improve the signal-to-noise ratio and overall speech intelligibility. Measurements based on the Short-Time Objective Intelligibility measure and an electrodogram analysis, as well as behavioral tests in up to 10 CI users, were used to compare performance with a single-channel, dual-loop, front-end AGC and with an adaptive back-end multiband dynamic compensation system (Voice Guard [VG]). Speech intelligibility in quiet and at a +10 dB signal-to-noise ratio was assessed with the Hochmair–Schulz–Moser sentence test. A logatome discrimination task with different consonants was performed in quiet. Speech intelligibility was significantly higher in quiet for VG than for AGC, but similar in noise. Participants obtained significantly better scores with VG than with AGC in the logatome discrimination task. The objective measurements predicted significantly better performance for VG. Overall, a dynamic compensation system can outperform single-stage compression (AGC + linear compression) for speech perception in quiet.
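The modulation-compression effect described above can be illustrated with a minimal single-channel feed-forward compressor. This is a generic sketch: the threshold, ratio, and time constants are illustrative placeholders, not the parameters of the tested AGC or VG processors.

```python
import numpy as np

def agc_compress(x, fs, threshold_db=-30.0, ratio=3.0, attack_ms=5.0, release_ms=50.0):
    """Single-channel feed-forward compressor: levels above `threshold_db`
    grow by only 1/ratio dB per input dB, shrinking the dynamic range."""
    # Envelope follower with separate attack/release smoothing coefficients
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = np.zeros_like(x)
    level = 1e-9
    for n, sample in enumerate(np.abs(x)):
        a = a_att if sample > level else a_rel
        level = a * level + (1.0 - a) * sample
        env[n] = level
    env_db = 20.0 * np.log10(np.maximum(env, 1e-9))
    # Gain reduction applied only above the compression threshold
    gain_db = np.where(env_db > threshold_db,
                       (threshold_db - env_db) * (1.0 - 1.0 / ratio),
                       0.0)
    return x * 10.0 ** (gain_db / 20.0)

# A 4 Hz amplitude-modulated tone: its envelope peaks are attenuated far
# more than its troughs, i.e., the modulations are compressed.
fs = 16000
t = np.arange(fs) / fs
x = (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)
y = agc_compress(x, fs)
```

Because loud envelope peaks are turned down while quiet portions pass unchanged, the output's peak-to-trough envelope range is smaller than the input's, which is the reduced modulation depth the abstract associates with lower intelligibility in noise.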


2021 ◽  
pp. 019459982110492
Author(s):  
Allan M. Henslee ◽  
Christopher R. Kaufmann ◽  
Matt D. Andrick ◽  
Parker T. Reineke ◽  
Viral D. Tejani ◽  
...  

Objective Electrocochleography (ECochG) is increasingly being used during cochlear implant (CI) surgery to detect and mitigate insertion-related intracochlear trauma, as a drop in the ECochG signal has been shown to correlate with a decline in hearing outcomes. In this study, an ECochG-guided robotics-assisted CI insertion system was developed and characterized that provides controlled, consistent electrode array insertions while monitoring and adapting to real-time ECochG signals. Study Design Experimental research. Setting A research laboratory and animal testing facility. Methods The system comprises an electrode array insertion drive unit, an extracochlear recording electrode module, and a control console that interfaces with both components and the surgeon. A proof-of-concept benchtop study evaluated the ability of the system to detect simulated ECochG signal changes and robotically adapt the insertion. Additionally, the system was evaluated in a pilot in vivo sheep study to characterize the signal-to-noise ratio and amplitude of ECochG recordings during robotics-assisted insertions. Results The system exhibited microvolt-level signal resolution and a response time <100 milliseconds after signal change detection, indicating that it can detect changes and respond faster than a human. Additionally, the animal results demonstrated that the system was capable of recording ECochG signals with a high signal-to-noise ratio and sufficient amplitude. Conclusion An ECochG-guided robotics-assisted CI insertion system can detect real-time drops in ECochG signals during electrode array insertions and immediately alter the insertion motion. The system may provide a surgeon the means to monitor and reduce CI insertion–related trauma beyond manual insertion techniques, for improved CI hearing outcomes.
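The control logic implied by the Results, detecting a drop in the recorded signal and reacting within a bounded response time, can be sketched as a simple amplitude monitor. The 6 dB drop criterion and the baseline window below are hypothetical illustration values, not the system's published parameters.

```python
import numpy as np

def monitor_insertion(amplitudes, drop_db=6.0, baseline_n=5):
    """Return the first recording index at which the ECochG amplitude falls
    `drop_db` below a running baseline (median of the last `baseline_n`
    recordings), signaling the drive unit to pause or retract; None if no
    drop occurs. Thresholds are illustrative, not the published criteria."""
    for i, amp in enumerate(amplitudes):
        if i < baseline_n:
            continue  # accumulate a baseline before judging drops
        baseline = np.median(amplitudes[i - baseline_n:i])
        if baseline > 0 and 20.0 * np.log10(amp / baseline) <= -drop_db:
            return i  # drop detected: controller would halt the insertion here
    return None

# Stable 10 uV recordings, then a sudden drop to 3 uV (about -10.5 dB)
halt_at = monitor_insertion([10.0] * 8 + [3.0] + [10.0] * 3)
```

In the real system this comparison would run on each new ECochG recording during advancement, with the <100 ms response budget covering detection plus the motion command.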


2014 ◽  
Vol 25 (10) ◽  
pp. 952-968 ◽  
Author(s):  
Stephen Julstrom ◽  
Linda Kozma-Spytek

Background: In order to better inform the development and revision of the American National Standards Institute C63.19 and American National Standards Institute/Telecommunications Industry Association-1083 hearing aid compatibility standards, a previous study examined the signal strength and signal (speech)-to-noise (interference) ratio needs of hearing aid users when using wireless and cordless phones in the telecoil coupling mode. This study expands that examination to cochlear implant (CI) users, in both telecoil and microphone modes of use. Purpose: The purpose of this study was to evaluate the magnetic and acoustic signal levels needed by CI users for comfortable telephone communication and the levels of various interfering wireless communication–related noise types that users could tolerate relative to the speech level. Research Design: A descriptive and correlational study. Simulated telephone speech and eight interfering noise types presented as continuous signals were linearly combined and were presented together either acoustically or magnetically to the participants’ CIs. The participants could adjust the loudness of the telephone speech and the interfering noises based on several assigned criteria. Study Sample: The 21 test participants ranged in age from 23–81 yr. All used wireless phones with their CIs, and 15 also used cordless phones at home. There were 12 participants who normally used the telecoil mode for telephone communication, whereas 9 used the implant’s microphone; all were tested accordingly. Data Collection and Analysis: A guided-intake questionnaire yielded general background information for each participant. A custom-built test control box fed by prepared speech-and-noise files enabled the tester or test participant, as appropriate, to switch between the various test signals and to precisely control the speech and noise levels independently. The tester, but not the test participant, could read and record the selected levels. 
Subsequent analysis revealed the preferred speech levels, speech (signal)-to-noise ratios, and the effect of possible noise-measurement weighting functions. Results: The participants' preferred telephone speech levels subjectively matched or were somewhat lower than the level that they heard from a 65 dB SPL wideband reference. The mean speech (signal)-to-noise ratio requirement for them to consider their telephone experience “acceptable for normal use” was 20 dB, very similar to the results for the hearing aid users of the previous study. Significant differences in the participants’ apparent levels of noise tolerance among the noise types when the noise level was determined using A-weighting were eliminated when a CI-specific noise-measurement weighting was applied. Conclusions: The results for the CI users in terms of both preferred levels for wireless and cordless phone communication and signal-to-noise requirements closely paralleled the corresponding results for hearing aid users from the previous study, and showed no significant differences between the microphone and telecoil modes of use. Signal-to-noise requirements were directly related to the participants’ noise audibility threshold and were independent of noise type when appropriate noise-measurement weighting was applied. Extending the investigation to include noncontinuous interfering noises and forms of radiofrequency interference other than additive audiofrequency noise could be areas of future study.
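The role of the noise-measurement weighting can be sketched numerically: applying a per-band weighting (in dB) to the noise spectrum before summing its power changes the measured speech-to-noise ratio, which is how a CI-specific weighting can reconcile apparently different tolerances across noise types. The three band levels and the low-frequency cut below are illustrative values, not the weighting derived in the study.

```python
import numpy as np

def weighted_snr_db(speech_db, noise_band_db, weight_db):
    """Speech-to-noise ratio after applying a per-band weighting (dB)
    to the noise spectrum before summing its total power."""
    weighted = np.asarray(noise_band_db, dtype=float) + np.asarray(weight_db, dtype=float)
    # Sum band powers (convert dB -> power, sum, back to dB)
    noise_total_db = 10.0 * np.log10(np.sum(10.0 ** (weighted / 10.0)))
    return speech_db - noise_total_db

bands_db = [60.0, 55.0, 50.0]                       # noise levels in three bands
flat = weighted_snr_db(65.0, bands_db, [0, 0, 0])   # unweighted measurement
lf_cut = weighted_snr_db(65.0, bands_db, [-10, 0, 0])  # discount low-frequency noise
```

A weighting that discounts bands the listener barely hears raises the measured SNR for low-frequency-heavy noise types, so noises that looked "more tolerated" under A-weighting can end up at the same weighted SNR requirement.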


2021 ◽  
Vol 25 ◽  
pp. 233121652110141
Author(s):  
Anja Eichenauer ◽  
Uwe Baumann ◽  
Timo Stöver ◽  
Tobias Weissgerber

Clinical speech perception tests with simple presentation conditions often overestimate the impact of signal preprocessing on speech perception in complex listening environments. A new procedure was developed to assess speech perception in interleaved acoustic environments of different complexity, allowing investigation of the impact of an automatic scene classification (ASC) algorithm on speech perception. The procedure was applied in cohorts of normal hearing (NH) controls and unilateral and bilateral cochlear implant (CI) users. Speech reception thresholds (SRTs) were measured by means of a matrix sentence test in five acoustic environments that included different noise conditions (amplitude-modulated and continuous), two spatial configurations, and reverberation. The acoustic environments were presented in randomized, mixed order within a single experimental run. An acoustic room simulation was played back over an auralization setup with 128 loudspeakers. Eighteen NH listeners, 16 unilateral, and 16 bilateral CI users participated. SRTs were evaluated for each individual acoustic environment and as a mean SRT. Mean SRTs improved by 2.4 dB signal-to-noise ratio for unilateral and 1.3 dB signal-to-noise ratio for bilateral CI users with activated ASC. Without ASC, the mean SRT of bilateral CI users was 3.7 dB better than that of unilateral CI users. The mean SRT showed significant group differences, with the NH group performing best and unilateral CI users performing worst, up to 13 dB poorer than NH. The proposed speech test procedure successfully demonstrated that speech perception and the benefit of ASC depend on the acoustic environment.
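An SRT measurement with a matrix sentence test is adaptive: the SNR is raised after errors and lowered after correct responses so the track converges on the SNR of 50% intelligibility. The sketch below is a simplified 1-up/1-down staircase with illustrative step size, trial count, and reversal rule; real matrix tests typically adapt on word scores rather than whole sentences, and `respond` stands in for the listener.

```python
def adaptive_srt(respond, start_snr=0.0, step_db=2.0, trials=20):
    """1-up/1-down adaptive track converging on the SNR of 50% correct.
    `respond(snr)` returns True if the sentence was repeated correctly.
    The SRT estimate is the mean SNR at the last reversals."""
    snr = start_snr
    reversals = []
    last_correct = None
    for _ in range(trials):
        correct = respond(snr)
        if last_correct is not None and correct != last_correct:
            reversals.append(snr)  # track direction changed here
        snr += -step_db if correct else step_db  # easier after miss, harder after hit
        last_correct = correct
    tail = reversals[-6:] if len(reversals) >= 6 else reversals
    return sum(tail) / len(tail) if tail else snr

# Deterministic stand-in listener for illustration: correct whenever SNR >= -5 dB
srt = adaptive_srt(lambda snr: snr >= -5.0)
```

With a deterministic listener the track oscillates around the listener's threshold, so the reversal average lands at that threshold; with a real listener the same logic converges on the 50%-correct point.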

