AutoAdaptive: A Noise Level–Sensitive Beamformer for MED EL Cochlear Implant Patients

2019 ◽  
Vol 30 (08) ◽  
pp. 731-734
Author(s):  
Michael F. Dorman ◽  
Sarah Cook Natale

Abstract
When cochlear implant (CI) listeners use a directional microphone or beamformer system to improve speech understanding in noise, the gain in understanding for speech presented from the front of the listener coexists with a decrease in speech understanding from the back. One way to maximize the usefulness of these systems is to keep the microphone in omnidirectional mode in low noise and switch to directional mode in high noise.
The purpose of this experiment was to assess the levels of speech understanding in noise allowed by a new signal processing algorithm for MED EL CIs, AutoAdaptive, which operates in the manner described previously.
Seven listeners fit with bilateral CIs were tested in a simulation of a crowded restaurant with speech presented from the front and from the back at three noise levels: 45, 55, and 65 dB SPL.
The listeners were seated in the middle of an array of eight loudspeakers. Sentences from the AzBio sentence lists were presented from loudspeakers at 0° or 180° azimuth. Restaurant noise at 45, 55, and 65 dB SPL was presented from all eight loudspeakers. The speech understanding scores (words correct) were subjected to a two-factor (loudspeaker location and noise level), repeated-measures analysis of variance with posttests.
The analysis of variance showed main effects of level and location and a significant interaction. Posttests showed that speech understanding scores for the front and back loudspeakers did not differ significantly at the 45- and 55-dB noise levels but did differ significantly at the 65-dB noise level, with increased scores for signals from the front and decreased scores for signals from the back.
The AutoAdaptive feature provides omnidirectional benefit at low noise levels, i.e., similar levels of speech understanding for talkers in front of and behind the listener, and beamformer benefit at higher noise levels, i.e., increased speech understanding for signals from the front.
The automatic switching feature will be of value to the many patients who prefer not to manually switch programs on their CIs.
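The switching behavior described above, omnidirectional in low noise and directional in high noise, can be sketched as a simple level-driven state machine. This is an illustration only: the abstract does not describe MED EL's actual AutoAdaptive switching criteria, and the threshold, hysteresis value, and function name below are hypothetical.

```python
def select_microphone_mode(noise_level_db, current_mode="omni",
                           threshold_db=60.0, hysteresis_db=3.0):
    """Illustrative level-driven microphone mode switch.

    Stays omnidirectional in low noise and switches to the directional
    (beamformer) mode in high noise. A small hysteresis band prevents
    rapid toggling when the noise level hovers near the threshold.
    All numeric values here are hypothetical, not MED EL's parameters.
    """
    if current_mode == "omni" and noise_level_db >= threshold_db:
        return "directional"
    if current_mode == "directional" and noise_level_db <= threshold_db - hysteresis_db:
        return "omni"
    return current_mode

# At the study's low and high noise levels (45 and 65 dB SPL), such a
# rule would leave the microphone omnidirectional and directional,
# respectively, matching the behavior the abstract describes.
print(select_microphone_mode(45.0))   # low noise: omnidirectional
print(select_microphone_mode(65.0))   # high noise: directional
```

A hysteresis band like this is a common design choice for automatic classifiers in hearing devices, since mode flapping at a hard threshold is audible and distracting.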

Author(s):  
Jourdan T. Holder ◽  
Adrian L. Taylor ◽  
Linsey W. Sunderhaus ◽  
Rene H. Gifford

Background: Despite improvements in cochlear implant (CI) technology, pediatric CI recipients continue to have more difficulty understanding speech in background noise than their typically hearing peers. A variety of strategies have been evaluated to help mitigate this disparity, such as signal processing, remote microphone technology, and microphone placement. Previous studies of microphone placement used speech processors that are now dated, and most studies investigating the improvement of speech recognition in background noise included adult listeners only. Purpose: The purpose of the present study was to investigate the effects of microphone location and beamforming technology on speech understanding in noise for pediatric CI recipients. Research Design: A prospective, repeated-measures, within-participant design was used to compare performance across listening conditions. Study Sample: A total of nine children (aged 6.6 to 15.3 years) with at least one Advanced Bionics CI were recruited for this study. Data Collection and Analysis: The Basic English Lexicon Sentences and AzBio Sentences were presented at 0° azimuth at 65 dB SPL in +5 dB signal-to-noise ratio noise presented from seven speakers using the R-SPACE system (Advanced Bionics, Valencia, CA). Performance was compared across three omnidirectional microphone configurations (processor microphone, T-Mic 2, and processor + T-Mic 2) and two directional microphone configurations (UltraZoom and auto UltraZoom). The two youngest participants were not tested in the directional microphone configurations. Results: No significant differences were found between the various omnidirectional microphone configurations. UltraZoom provided significant benefit over all omnidirectional microphone configurations (T-Mic 2, p = 0.004; processor microphone, p < 0.001; processor microphone + T-Mic 2, p = 0.018) but was not significantly different from auto UltraZoom (p = 0.176). Conclusions: All omnidirectional microphone configurations yielded similar performance, suggesting that a child's listening performance in noise will not be compromised by choosing the microphone configuration best suited for the child. UltraZoom (adaptive beamformer) yielded higher performance than all omnidirectional microphones in moderate background noise for adolescents aged 9 to 15 years. These data suggest that for older children who can reliably use manual controls, UltraZoom will yield significantly higher performance in background noise when the target is in front of the listener.


2015 ◽  
Vol 26 (06) ◽  
pp. 532-539 ◽  
Author(s):  
Jace Wolfe ◽  
Mila Morais ◽  
Erin Schafer

Background: Cochlear implant (CI) recipients experience difficulty understanding speech in noise. Remote-microphone technology that improves the signal-to-noise ratio is recognized as an effective means to improve speech recognition in noise; however, there are no published studies evaluating the potential benefits of a wireless, remote-microphone, digital, audio-streaming accessory device (hereafter referred to as a remote-microphone accessory) designed to deliver audio signals directly to a CI sound processor. Purpose: The objective of this study was to compare speech recognition in quiet and in noise of recipients while using their CI alone and with a remote-microphone accessory. Research Design: A two-way repeated-measures design was used to evaluate performance differences obtained in quiet and in increasing levels of competing noise with the CI sound processor alone and with the sound processor paired to the remote-microphone accessory. Study Sample: Sixteen users of Cochlear Nucleus 24 Freedom, CI512, and CI422 implants were included in the study. Data Collection and Analysis: Participants were evaluated in 14 conditions: use of the sound processor alone and with the remote-microphone accessory, each in quiet and in competing noise at 50, 55, 60, 65, 70, and 75 dBA. Speech was presented at 65 dBA at the location of the participant (85 dBA at the location of the remote microphone). Speech recognition was evaluated in each of these conditions with one full list of AzBio sentences. Results: Speech recognition in quiet and at all competing noise levels, except the 75 dBA condition, was significantly better with use of the remote-microphone accessory than with the CI sound processor alone. As expected, in all technology conditions, performance was significantly poorer as the competing noise level increased.
Conclusions: Use of a remote-microphone accessory designed for a CI sound processor provides superior speech recognition in quiet and in noise when compared with performance obtained with the CI sound processor alone.


2019 ◽  
Vol 30 (08) ◽  
pp. 659-671 ◽  
Author(s):  
Ashley Zaleski-King ◽  
Matthew J. Goupell ◽  
Dragana Barac-Cikoja ◽  
Matthew Bakke

Abstract
Bilateral inputs should ideally improve sound localization and speech understanding in noise. However, for many bimodal listeners [i.e., individuals using a cochlear implant (CI) with a contralateral hearing aid (HA)], such bilateral benefits are, at best, inconsistent. The degree to which clinically available HA and CI devices can function together to preserve interaural time and level differences (ITDs and ILDs, respectively) well enough to support the localization of sound sources is a question with important ramifications for speech understanding in complex acoustic environments.
The purpose of this study was to determine whether bimodal listeners are sensitive to changes in spatial location in a minimum audible angle (MAA) task.
Repeated-measures design.
Seven adult bimodal CI users (28–62 years) participated. All listeners reported regular use of digital HA technology in the nonimplanted ear.
The seven bimodal listeners were first asked to balance the loudness of prerecorded single-syllable utterances. The loudness-balanced stimuli were then presented via the direct audio inputs of the two devices with an ITD applied. The task of the listener was to determine the perceived difference in processing delay (the interdevice delay [IDD]) between the CI and HA devices. Finally, virtual free-field MAA performance was measured for different spatial locations, both with and without the IDD correction, which was added with the intent of perceptually synchronizing the devices.
During the loudness-balancing task, all listeners required increased acoustic input to the HA relative to the CI most comfortable level to achieve equal interaural loudness. During the ITD task, three listeners could perceive changes in intracranial position by distinguishing sounds coming from the left or the right hemifield; when the CI was delayed by 0.73, 0.67, or 1.7 msec, the signal lateralized from one side to the other.
When MAA localization performance was assessed, only three of the seven listeners consistently achieved above-chance performance, even when an IDD correction was included. It is not clear whether the listeners who were able to consistently complete the MAA task did so via binaural comparison or by extracting monaural loudness cues. Four listeners could not perform the MAA task, even though they could have used a monaural loudness cue strategy.
These data suggest that sound localization is extremely difficult for most bimodal listeners. This difficulty does not seem to be caused by large loudness imbalances and IDDs. Sound localization is best when performed via a binaural comparison, in which frequency-matched inputs convey ITD and ILD information. Although low-frequency acoustic amplification with a HA combined with a CI may produce an overlapping region of frequency-matched inputs, and thus provide an opportunity for binaural comparisons for some bimodal listeners, our study showed that this may not be beneficial or useful for spatial location discrimination tasks. The inability of our listeners to use monaural level cues to perform the MAA task highlights the difficulty of using a HA and CI together to glean information about the direction of a sound source.


2020 ◽  
Vol 2020 ◽  
pp. 1-8
Author(s):  
Jinmiao Fang ◽  
Jinsong Tu ◽  
Kunming Wu

To establish evaluation criteria for pavement skid resistance and noise levels in tunnel pavements, zoning and control standards for skid resistance and concrete pavement noise were examined. Transverse friction coefficient (TFC) test equipment and the on-board sound intensity (OBSI) method were used to evaluate the antisliding characteristics and noise levels of several tunnel pavements. The results indicated poor antisliding characteristics and high noise levels in ordinary grooved cement concrete pavement, whereas new types of cement concrete pavement, such as exposed concrete pavements and polymer-modified cement concrete pavements, had good antisliding characteristics and achieved low noise levels. Combined with the cluster analysis method, a zoning method for the antisliding characteristics and noise levels of concrete pavement is proposed. The antisliding characteristics and noise levels of the pavement are divided into three zones. To ensure safety and comfort during driving, the antisliding value (SFC) of the tunnel pavement should be more than 50, and the noise level should not exceed 105 dB. Finally, the correlation between the antisliding and noise levels of pavement was analyzed. The results indicated that the antisliding value of pavement has a strong correlation with the noise level.


Electronics ◽  
2019 ◽  
Vol 8 (2) ◽  
pp. 163 ◽  
Author(s):  
Emir Turajlic ◽  
Alen Begović ◽  
Namir Škaljo

Blind additive white Gaussian noise level estimation is an important and challenging area of digital image processing, with numerous applications including image denoising and image segmentation. In this paper, a novel block-based noise level estimation algorithm is proposed. The algorithm relies on an artificial neural network to perform a complex analysis of image patches in the singular value decomposition (SVD) domain and to evaluate noise level estimates. The algorithm exhibits the capacity to adjust the effective singular value tail length with respect to the observed noise levels. The results of a comparative analysis show that the proposed ANN-based algorithm outperforms the alternative single-stage block-based noise level estimation algorithm in the SVD domain in terms of mean square error (MSE) and average error for all considered block sizes. The most significant improvements in MSE are obtained at low noise levels. For some test images, such as “Car” and “Girlface”, at σ = 1, these improvements can be as high as 99% and 98.5%, respectively. In addition, the proposed algorithm eliminates error-prone manual parameter fine-tuning and automates the entire noise level estimation process.
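The core SVD-domain idea, that the leading singular values of an image block carry image structure while the tail is dominated by additive white Gaussian noise, can be illustrated without the paper's neural network. The sketch below is not the authors' algorithm: the function name, the synthetic block, and the tail fraction are all assumptions chosen for illustration. It merely demonstrates that a tail statistic grows with the true noise level, which is the feature an ANN could then map to a calibrated noise estimate.

```python
import numpy as np

def svd_tail_statistic(block, tail_frac=0.75):
    """Mean of the trailing singular values of an image block.

    For a block with low-rank structure plus additive white Gaussian
    noise, the trailing singular values are dominated by the noise, so
    this statistic increases with the true noise level. tail_frac is an
    illustrative choice, not a tuned parameter from the paper.
    """
    s = np.linalg.svd(block, compute_uv=False)   # singular values, descending
    tail_start = int(len(s) * (1.0 - tail_frac))
    return s[tail_start:].mean()

rng = np.random.default_rng(0)
# Synthetic "clean" 64x64 block: a smooth gradient (essentially rank 2).
x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
clean = 128.0 * (x + y)

stats = {}
for sigma in (5.0, 20.0):
    noisy = clean + rng.normal(0.0, sigma, clean.shape)
    stats[sigma] = svd_tail_statistic(noisy)

# The tail statistic tracks the noise level: larger sigma, larger tail mean.
print(stats[5.0] < stats[20.0])
```

In the published method, such SVD-domain features (with an adaptively chosen tail length) are fed to an ANN rather than read off directly, which avoids the manual calibration a raw tail statistic would require.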


2008 ◽  
Vol 19 (06) ◽  
pp. 481-495 ◽  
Author(s):  
Jeffrey Weihing ◽  
Frank E. Musiek

Background: A common complaint of patients with (central) auditory processing disorder is difficulty understanding speech in noise. Because binaural hearing improves speech understanding in compromised listening situations, quantifying this ability at different levels of noise may yield a measure with high clinical utility. Purpose: To examine binaural enhancement (BE) and binaural interaction (BI) at different levels of noise for the auditory brainstem response (ABR) and middle latency response (MLR) in a normal-hearing population. Research Design: An experimental study using a repeated-measures design. Study Sample: Fifteen normal-hearing female adults served as subjects. Normal hearing was assessed by pure-tone audiometry and otoacoustic emissions. Intervention: All subjects were exposed to 0, 20, and 35 dB effective masking (EM) of white noise during monotic and diotic click stimulation. Data Collection and Analysis: ABR and MLR responses were acquired simultaneously. Peak amplitudes and latencies were recorded and compared across conditions using a repeated-measures analysis of variance (ANOVA). Results: For BE, ABR results showed enhancement at 0 and 20 dB EM, but not at 35 dB EM. The MLR showed BE at all noise levels, but the degree of BE decreased with increasing noise level. For BI, both the ABR and MLR showed BI at all noise levels. However, the degree of BI again decreased with increasing noise level for the MLR. Conclusions: The results demonstrate the ability to measure BE simultaneously in the ABR and MLR in up to 20 dB of EM noise, and BI in up to 35 dB of EM noise. Results also suggest that ABR neural generators may respond to noise differently than MLR generators.


Author(s):  
J. Matthews ◽  
J. D. C. Talamo

A high incidence of hearing loss has been encountered among tractor drivers, and noise levels are shown to be further increased by the addition of cabs, particularly those which are structurally strong to resist crushing if the vehicle overturns. Some reductions in the noise level of the operator's environment can be obtained by covering the engine or by exhaust system modifications, while possible future improvements to diesel engine design may effect a significant improvement. However, it is proposed that noise reduction is likely to be achieved by attention to acoustic features of the operator's cab. The inclusion of resilient mounts, substantial floors and bulkheads, and acoustically absorbent linings are all shown to provide worthwhile improvements and, in combination, these measures can reduce noise levels from more than 100 dBA to 90 dBA or less. Where the tractor is fitted with a safety frame only, a low noise fabric cladding is shown to be feasible.


2021 ◽  
Vol 12 ◽  
Author(s):  
Alexandra Annemarie Ludwig ◽  
Sylvia Meuret ◽  
Rolf-Dieter Battmer ◽  
Marc Schönwiesner ◽  
Michael Fuchs ◽  
...  

Spatial hearing is crucial in real life but deteriorates in participants with severe sensorineural hearing loss or single-sided deafness. This ability can potentially be improved with a unilateral cochlear implant (CI). The present study investigated sound localization in participants with single-sided deafness provided with a CI. Sound localization was measured separately at eight loudspeaker positions (4°, 30°, 60°, and 90°, on the CI side and on the normal-hearing side). Low- and high-frequency noise bursts were used in the tests to investigate possible differences in the processing of interaural time and level differences. Data were compared to those of normal-hearing adults aged between 20 and 83 years. In addition, the benefit of the CI for speech understanding in noise was compared to the localization ability. Fifteen of 18 participants were able to localize signals on the CI side and on the normal-hearing side, although performance was highly variable across participants. Three participants always pointed to the normal-hearing side, irrespective of the location of the signal. The comparison with control data showed that participants had particular difficulty localizing sounds at frontal locations and on the CI side. In contrast to most previous results, participants were able to localize low-frequency signals, although they localized high-frequency signals more accurately. Speech understanding in noise was better with the CI than without it, but only at a position where the CI also improved sound localization. Our data suggest that a CI can, to a large extent, restore localization in participants with single-sided deafness. Difficulties may remain at frontal locations and on the CI side. However, speech understanding in noise improves when wearing the CI. Treatment with a CI in these participants might provide real-world benefits, such as improved orientation in traffic and speech understanding in difficult listening situations.


2020 ◽  
Vol 81 (1-4) ◽  
pp. 17-23
Author(s):  
P.A. Cucis ◽  
C. Berger-Vachon ◽  
R. Hermann ◽  
H. Thaï-Van ◽  
S. Gallego ◽  
...  

The cochlear implant is the most successful implantable device for the rehabilitation of profound deafness. However, in some cases, the electrical stimulation delivered by an electrode can spread inside the cochlea, creating overlap and interaction between frequency channels. Channel interaction can be reduced by using channel-selection algorithms such as the “nofm” coding strategy. This paper describes the preliminary results of experiments conducted with normal-hearing subjects (n = 9). Using a vocoder, the present study simulated hearing through a cochlear implant. Speech understanding in noise was measured while varying the number of selected channels (“nofm”: 4, 8, 12, and 16 of 20) and the degree of simulated channel interaction (“Low”, “Medium”, “High”). Also with the vocoder, we evaluated the impact of simulated channel interaction on frequency selectivity by measuring psychoacoustic tuning curves. The results showed significant average effects of the signal-to-noise ratio (p < 0.0001), the degree of channel interaction (p < 0.0001), and the number of selected channels (p = 0.029). The highest degree of channel interaction significantly decreased intelligibility as well as frequency selectivity. These results underline the importance of measuring channel interaction in cochlear implant patients, both as a prognostic test and to adjust fitting methods accordingly. The next step of this project will be to replicate these experiments with implant users to corroborate our results.
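The "nofm" channel selection underlying the study can be sketched in a few lines: for each analysis frame, only the n channels with the largest envelope amplitudes are stimulated and the remaining m − n are dropped. The sketch below is a minimal illustration of that principle, not the vocoder used in the study; the function name, array shapes, and toy envelope values are assumptions.

```python
import numpy as np

def n_of_m_select(envelopes, n):
    """Per-frame n-of-m channel selection.

    envelopes: array of shape (m_channels, n_frames) holding the
    envelope amplitude of each frequency channel in each frame.
    Returns an array of the same shape in which, per frame, only the
    n largest-amplitude channels are kept and all others are zeroed.
    """
    out = np.zeros_like(envelopes)
    for t in range(envelopes.shape[1]):
        keep = np.argsort(envelopes[:, t])[-n:]   # indices of the n largest
        out[keep, t] = envelopes[keep, t]
    return out

# Toy example: 6 channels, 2 frames; select 2 of 6 per frame.
env = np.array([[0.1, 0.9],
                [0.8, 0.2],
                [0.3, 0.7],
                [0.5, 0.1],
                [0.2, 0.4],
                [0.9, 0.3]])
selected = n_of_m_select(env, n=2)
print((selected != 0).sum(axis=0))   # exactly n channels active per frame
```

Selecting fewer channels per frame reduces the number of simultaneously active electrodes, which is why such strategies are expected to mitigate channel interaction; the study varied n (4, 8, 12, 16 of 20) to probe this trade-off under simulated interaction.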

