Speech Recognition in Quiet and Noise in Borderline Cochlear Implant Candidates

2007 ◽  
Vol 18 (10) ◽  
pp. 872-882 ◽  
Author(s):  
Farah Mohd Alkaf ◽  
Jill B. Firszt

The present study 1) examined speech recognition at three intensity levels and in noise for adults with bilateral hearing loss who wore amplification and were referred for cochlear implant evaluation but did not meet current audiological criteria, and 2) compared their performance to cochlear implant recipients using current implant technology. When tested at 70 dB SPL, hearing aid subjects' word and sentence recognition scores were similar to or greater than the scores of cochlear implant recipients. Compared to their implanted peers, however, subjects' scores were significantly poorer at normal (60 dB SPL) and soft (50 dB SPL) presentation levels for words and at soft levels for sentences; detection thresholds were also significantly poorer at 1000 Hz and above. The assessment of candidates at louder-than-normal levels (i.e., 70 dB SPL) may not correctly portray their day-to-day communication struggles.

2019 ◽  
Vol 40 (3) ◽  
pp. 621-635 ◽  
Author(s):  
Arlene C. Neuman ◽  
Annette Zeman ◽  
Jonathan Neukam ◽  
Binhuan Wang ◽  
Mario A. Svirsky

2019 ◽  
Vol 30 (02) ◽  
pp. 131-144 ◽  
Author(s):  
Erin M. Picou ◽  
Todd A. Ricketts

Abstract People with hearing loss experience difficulty understanding speech in noisy environments. Beamforming microphone arrays in hearing aids can improve the signal-to-noise ratio (SNR) and thus also speech recognition and subjective ratings. Unilateral beamformer arrays, also known as directional microphones, accomplish this improvement using two microphones in one hearing aid. Bilateral beamformer arrays, which combine information across four microphones in a bilateral fitting, further improve the SNR. Early bilateral beamformers were static, with fixed attenuation patterns. Recently, adaptive bilateral beamformers have been introduced in commercial hearing aids.

The purpose of this article was to evaluate the potential benefits of adaptive unilateral and bilateral beamformers for improving sentence recognition and subjective ratings in a laboratory setting. A secondary purpose was to identify participant factors that explain some of the variability in beamformer benefit.

Participants were fitted with study hearing aids equipped with commercially available adaptive unilateral and bilateral beamformers. Participants completed sentence recognition testing in background noise using three hearing aid settings (omnidirectional, unilateral beamformer, bilateral beamformer) and two noise source configurations (surround, side). After each condition, participants made subjective ratings of their perceived work, desire to control the situation, willingness to give up, and tiredness.

Eighteen adults (50–80 yr, M = 66.2, SD = 8.6) with symmetrical mild sloping to severe hearing loss participated.

Sentence recognition scores and subjective ratings were analyzed separately using generalized linear models with two within-subject factors (hearing aid microphone and noise configuration). Two benefit scores were calculated: (1) unilateral beamformer benefit (relative to performance with the omnidirectional setting) and (2) additional bilateral beamformer benefit (relative to performance with the unilateral beamformer). Hierarchical multiple linear regression was used to determine whether beamformer benefit was associated with participant factors (age, degree of hearing loss, unaided speech-in-noise ability, spatial release from masking, and performance in the omnidirectional setting).

Sentence recognition and subjective ratings of work, control, and tiredness were better with both types of beamformers than in the omnidirectional conditions. In addition, the bilateral beamformer offered small additional improvements over the unilateral beamformer in sentence recognition and in subjective ratings of tiredness. Speech recognition performance and subjective ratings were generally independent of noise configuration. Performance in the omnidirectional setting and pure-tone average were independently related to unilateral beamformer benefit: those with the lowest performance or the greatest degree of hearing loss benefited the most. No factors were significantly related to additional bilateral beamformer benefit.

Adaptive bilateral beamformers offer additional advantages over adaptive unilateral beamformers in hearing aids. The small additional advantages with the adaptive bilateral beamformer are comparable to those reported in the literature with static beamformers. Although these additional benefits are small, they positively affected subjective ratings of tiredness. These data suggest that adaptive bilateral beamformers have the potential to improve listening in difficult situations for hearing aid users, and that patients who struggle the most without beamforming microphones may also benefit the most from the technology.
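As an illustrative sketch (the scores below are hypothetical, not data from the study), the two benefit scores described in the abstract are simple differences between condition scores:

```python
# Beamformer benefit scores as defined in the abstract, computed from
# percent-correct sentence recognition in three hearing aid settings.
# All numeric values here are made up for illustration.

def unilateral_benefit(omni_score: float, uni_score: float) -> float:
    """Unilateral beamformer benefit relative to the omnidirectional setting."""
    return uni_score - omni_score

def additional_bilateral_benefit(uni_score: float, bi_score: float) -> float:
    """Additional bilateral beamformer benefit relative to the unilateral one."""
    return bi_score - uni_score

# Hypothetical percent-correct scores: omnidirectional, unilateral, bilateral
omni, uni, bi = 55.0, 70.0, 74.0
print(unilateral_benefit(omni, uni))          # 15.0
print(additional_bilateral_benefit(uni, bi))  # 4.0
```

The second score is computed against the unilateral (not omnidirectional) condition, which is why the abstract describes it as an "additional" benefit.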


2020 ◽  
Vol 31 (01) ◽  
pp. 050-060 ◽  
Author(s):  
Jace Wolfe ◽  
Mila Duke ◽  
Erin Schafer ◽  
Christine Jones ◽  
Lori Rakita ◽  
...  

Abstract Children with hearing loss often experience difficulty understanding speech in noisy and reverberant classrooms. Traditional remote microphone use, in which the teacher wears a remote microphone that captures her speech and wirelessly delivers it to radio receivers coupled to a child's hearing aids, is often ineffective for small-group listening and learning activities. A potential solution is to place a remote microphone in the middle of the desk used for small-group learning, so that it captures the speech of the peers around the desk and wirelessly delivers it to the child's hearing aids.

The objective of this study was to compare speech recognition of children using hearing aids across three conditions: (1) hearing aid in an omnidirectional microphone mode (HA-O); (2) hearing aid with automatic activation of a directional microphone (HA-ADM), in which the hearing aid automatically switches in noisy environments from omnidirectional mode to a directional mode with a cardioid polar pattern; and (3) HA-ADM with simultaneous use of a remote microphone (RM) in a "Small Group" mode (HA-ADM-RM). The Small Group mode is designed to pick up multiple near-field talkers. An additional objective was to compare children's subjective listening preferences between the HA-ADM and HA-ADM-RM conditions.

A single-group, repeated-measures design was used to evaluate performance differences across the three technology conditions. Sentence recognition in noise was assessed in a classroom setting with each technology, with sentences presented at a fixed level from three loudspeakers (0, 90, and 270° azimuth) surrounding a desk at which the participant was seated. This arrangement was intended to simulate a small-group classroom learning activity.

Fifteen children with moderate to moderately severe hearing loss participated.

Speech recognition was evaluated in the three hearing technology conditions, and subjective auditory preference was evaluated in the HA-ADM and HA-ADM-RM conditions.

Use of the remote microphone system in the Small Group mode resulted in statistically significant improvements in sentence recognition in noise of 24 and 21 percentage points compared with the HA-O and HA-ADM conditions, respectively (individual benefit ranged from −8.6 to 61.1 and from 3.4 to 44 percentage points, respectively). There was no significant difference in sentence recognition in noise between the HA-O and HA-ADM conditions when the remote microphone system was not in use. Eleven of the 14 participants who completed the subjective rating scale reported at least a slight preference for the remote microphone system in the Small Group mode.

Objective and subjective measures of sentence recognition indicated that use of remote microphone technology with the Small Group mode may improve hearing performance in small-group learning activities. Sentence recognition in noise improved by 24 percentage points compared with the HA-O condition, and children expressed a preference for the remote microphone Small Group technology regarding listening comfort, sound quality, speech intelligibility, background noise reduction, and overall listening experience.


2019 ◽  
Vol 62 (10) ◽  
pp. 3834-3850 ◽  
Author(s):  
Todd A. Ricketts ◽  
Erin M. Picou ◽  
James Shehorn ◽  
Andrew B. Dittberner

Purpose Previous evidence supports benefits of bilateral hearing aids, relative to unilateral hearing aid use, in laboratory environments using audio-only (AO) stimuli and relatively simple tasks. The purpose of this study was to evaluate bilateral hearing aid benefits in ecologically relevant laboratory settings, with and without visual cues. In addition, we evaluated the relationship between bilateral benefit and clinically viable predictive variables. Method Participants included 32 adult listeners with hearing loss ranging from mild–moderate to severe–profound. Test conditions varied by hearing aid fitting type (unilateral, bilateral) and modality (AO, audiovisual). We tested participants in complex environments that evaluated the following domains: sentence recognition, word recognition, behavioral listening effort, gross localization, and subjective ratings of spatialization. Signal-to-noise ratio was adjusted to provide similar unilateral speech recognition performance in both modalities and across procedures. Results Significant and similar bilateral benefits were measured for both modalities on all tasks except listening effort, where bilateral benefits were not identified in either modality. Predictive variables were related to bilateral benefits in some conditions. With audiovisual stimuli, increasing hearing loss, unaided speech recognition in noise, and unaided subjective spatial ability were significantly correlated with increased benefits for many outcomes. With AO stimuli, these same predictive variables were not significantly correlated with outcomes. No predictive variables were correlated with bilateral benefits for sentence recognition in either modality. Conclusions Hearing aid users can expect significant bilateral hearing aid advantages for ecologically relevant, complex laboratory tests. 
Although future confirmatory work is necessary, these data indicate that the presence of vision strengthens the relationship between bilateral benefits and degree of hearing loss.


2012 ◽  
Vol 23 (08) ◽  
pp. 577-589 ◽  
Author(s):  
Mary Rudner ◽  
Thomas Lunner ◽  
Thomas Behrens ◽  
Elisabet Sundewall Thorén ◽  
Jerker Rönnberg

Background: Recently there has been interest in using subjective ratings as a measure of perceived effort during speech recognition in noise. Perceived effort may be an indicator of cognitive load. Thus, subjective effort ratings during speech recognition in noise may covary both with signal-to-noise ratio (SNR) and individual cognitive capacity. Purpose: The present study investigated the relation between subjective ratings of the effort involved in listening to speech in noise, speech recognition performance, and individual working memory (WM) capacity in hearing impaired hearing aid users. Research Design: In two experiments, participants with hearing loss rated perceived effort during aided speech perception in noise. Noise type and SNR were manipulated in both experiments, and in the second experiment hearing aid compression release settings were also manipulated. Speech recognition performance was measured along with WM capacity. Study Sample: There were 46 participants in all with bilateral mild to moderate sloping hearing loss. In Experiment 1 there were 16 native Danish speakers (eight women and eight men) with a mean age of 63.5 yr (SD = 12.1) and average pure tone (PT) threshold of 47.6 dB (SD = 9.8). In Experiment 2 there were 30 native Swedish speakers (19 women and 11 men) with a mean age of 70 yr (SD = 7.8) and average PT threshold of 45.8 dB (SD = 6.6). Data Collection and Analysis: A visual analog scale (VAS) was used for effort rating in both experiments. In Experiment 1, effort was rated at individually adapted SNRs while in Experiment 2 it was rated at fixed SNRs. Speech recognition in noise performance was measured using adaptive procedures in both experiments with Dantale II sentences in Experiment 1 and Hagerman sentences in Experiment 2.
Results: In both experiments, there was a strong and significant relation between rated effort and SNR that was independent of individual WM capacity, whereas the relation between rated effort and noise type seemed to be influenced by individual WM capacity. Experiment 2 showed that hearing aid compression setting influenced rated effort. Conclusions: Subjective ratings of the effort involved in speech recognition in noise reflect SNRs, and individual cognitive capacity seems to influence relative rating of noise type.


Revista CEFAC ◽  
2019 ◽  
Vol 21 (1) ◽  
Author(s):  
Lidiéli Dalla Costa ◽  
Sinéia Neujahr dos Santos ◽  
Maristela Julio Costa

ABSTRACT Purpose: to investigate speech recognition in silence and in noise in subjects with unilateral hearing loss, with and without hearing aids, and to analyze the benefit, self-perceived functional performance, satisfaction, and use of hearing aids in these subjects. Methods: eleven adults with unilateral, mixed or sensorineural, mild to severe hearing loss participated in this study. Speech recognition was evaluated with the Brazilian Portuguese sentence lists test; functional performance of hearing was assessed with the Speech, Spatial and Qualities of Hearing Scale questionnaire; satisfaction was assessed with the Satisfaction with Amplification in Daily Life questionnaire, both in Brazilian Portuguese; and hearing aid use was assessed from the patients' reports. Results: fitting hearing aids provided benefits in speech recognition in all positions evaluated, both in silence and in noise. The subjects did not report major limitations in communication activities when using hearing aids, and they were satisfied with amplification. Most of the subjects, however, did not use their hearing aids effectively. Discontinued hearing aid use may be explained by difficulty perceiving both the participation restrictions caused by the hearing loss and the benefit provided by the hearing aid, as well as by concerns about battery costs and aesthetics. Conclusion: although subjects with unilateral hearing loss showed benefits in speech recognition, in silence and in noise, and satisfaction with amplification, most did not effectively use their hearing aids.


Author(s):  
Jace Wolfe ◽  
Mila Duke ◽  
Sharon Miller ◽  
Erin Schafer ◽  
Christine Jones ◽  
...  

Background: For children with hearing loss, the primary goal of hearing aids is to provide improved access to the auditory environment within the limits of hearing aid technology and the child’s auditory abilities. However, there are limited data examining aided speech recognition at very low (40 dBA) and low (50 dBA) presentation levels. Purpose: Due to the paucity of studies exploring aided speech recognition at low presentation levels for children with hearing loss, the present study aimed to 1) compare aided speech recognition at different presentation levels between groups of children with normal hearing and hearing loss, 2) explore the effects of aided pure tone average (PTA) and aided Speech Intelligibility Index (SII) on aided speech recognition at low presentation levels for children with hearing loss ranging in degree from mild to severe, and 3) evaluate the effect of increasing low-level gain on aided speech recognition of children with hearing loss. Research Design: In phase 1 of this study, a two-group, repeated-measures design was used to evaluate differences in speech recognition. In phase 2 of this study, a single-group, repeated-measures design was used to evaluate the potential benefit of additional low-level hearing aid gain for low-level aided speech recognition of children with hearing loss. Study Sample: The first phase of the study included 27 school-age children with mild to severe sensorineural hearing loss and 12 school-age children with normal hearing. The second phase included eight children with mild to moderate sensorineural hearing loss. Intervention: Prior to the study, children with hearing loss were fitted binaurally with digital hearing aids. 
Children in the second phase were fitted binaurally with digital study hearing aids and completed a trial period with two different gain settings: 1) the gain required to match hearing aid output to prescriptive targets (i.e., the primary program), and 2) a 6-dB increase in overall gain for low-level inputs relative to the primary program. In both phases of this study, real-ear verification measures were completed to ensure the hearing aid output matched prescriptive targets. Data Collection and Analysis: Phase 1 included monosyllabic word recognition and syllable-final plural recognition at three presentation levels (40, 50, and 60 dBA). Phase 2 compared speech recognition performance on the same test measures and presentation levels with the two differing gain prescriptions. Results and Conclusions: In phase 1 of the study, aided speech recognition was significantly poorer in children with hearing loss than in children with normal hearing at all presentation levels. Higher aided SII in the better ear (55 dB SPL input) was associated with higher CNC word recognition at a 40 dBA presentation level. In phase 2, increasing the hearing aid gain for low-level inputs provided a significant improvement in syllable-final plural recognition at very low-level inputs and resulted in a non-significant trend toward better monosyllabic word recognition at very low presentation levels. Additional research is needed to document the speech recognition difficulties children with hearing aids may experience with low-level speech in the real world, as well as the potential benefit or detriment of providing additional low-level hearing aid gain.


Author(s):  
Snandan Sharma ◽  
Waldo Nogueira ◽  
A. John van Opstal ◽  
Josef Chalupper ◽  
Lucas H. M. Mens ◽  
...  

Purpose Speech understanding in noise and horizontal sound localization are poor in most cochlear implant (CI) users with a hearing aid (bimodal stimulation). This study investigated the effect of static and less-extreme adaptive frequency compression in hearing aids on spatial hearing. By means of frequency compression, we aimed to restore high-frequency audibility, and thus improve sound localization and spatial speech recognition. Method Sound-detection thresholds, sound localization, and spatial speech recognition were measured in eight bimodal CI users, with and without frequency compression. We tested two compression algorithms: a static algorithm, which compressed frequencies beyond the compression knee point (160 or 480 Hz), and an adaptive algorithm, which aimed to compress only consonants, leaving vowels unaffected (adaptive knee-point frequencies from 736 to 2946 Hz). Results Compression yielded a strong audibility benefit (high-frequency thresholds improved by 40 and 24 dB for static and adaptive compression, respectively) but no meaningful improvement in localization performance (errors remained >30 deg) or in spatial speech recognition across all participants. Localization biases without compression (toward the hearing-aid and implant side for low- and high-frequency sounds, respectively) disappeared or reversed with compression. The audibility benefits provided to each bimodal user partially explained any individual improvements in localization performance; shifts in bias; and, for six out of eight participants, benefits in spatial speech recognition. Conclusions We speculate that limiting factors such as a persistent hearing asymmetry and mismatch in spectral overlap prevent compression in bimodal users from improving sound localization.
Therefore, the benefit in spatial release from masking by compression is likely due to a shift of attention to the ear with the better signal-to-noise ratio facilitated by compression, rather than an improved spatial selectivity. Supplemental Material https://doi.org/10.23641/asha.16869485
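The static scheme described above can be sketched as a piecewise-linear frequency map that leaves frequencies below the knee point unchanged and compresses those above it. The compression ratio below is an illustrative assumption; the study does not specify the manufacturer's exact mapping:

```python
# Sketch of static frequency compression: input frequencies below the knee
# point pass through unchanged; frequencies above it are compressed toward
# the knee by a fixed ratio. The ratio of 2.0 is assumed for illustration;
# the knee points (160 or 480 Hz) come from the abstract.

def compress_frequency(f_hz: float, knee_hz: float = 480.0,
                       ratio: float = 2.0) -> float:
    """Map an input frequency (Hz) to its compressed output frequency (Hz)."""
    if f_hz <= knee_hz:
        return f_hz                      # below the knee: unchanged
    return knee_hz + (f_hz - knee_hz) / ratio  # above the knee: compressed

print(compress_frequency(480.0))   # 480.0  (at the knee, unchanged)
print(compress_frequency(4480.0))  # 2480.0 (high frequency shifted down)
```

Such a map lowers otherwise-inaudible high-frequency energy into the region where the aided ear still has usable hearing, which is how compression can restore audibility without necessarily restoring localization cues.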


2020 ◽  
Vol 31 (04) ◽  
pp. 271-276
Author(s):  
Grant King ◽  
Nicole E. Corbin ◽  
Lori J. Leibold ◽  
Emily Buss

Abstract Background Speech recognition in complex multisource environments is challenging, particularly for listeners with hearing loss. One source of difficulty is the reduced ability of listeners with hearing loss to benefit from spatial separation of the target and masker, an effect called spatial release from masking (SRM). Despite the prevalence of complex multisource environments in everyday life, SRM is not routinely evaluated in the audiology clinic. Purpose The purpose of this study was to demonstrate the feasibility of assessing SRM in adults using widely available tests of speech-in-speech recognition that can be conducted using standard clinical equipment. Research Design Participants were 22 young adults with normal hearing. The task was masked sentence recognition, using each of five clinically available corpora with speech maskers. The target always sounded like it originated from directly in front of the listener, and the masker either sounded like it originated from the front (colocated with the target) or from the side (separated from the target). In the real spatial manipulation conditions, source location was manipulated by routing the target and masker to either a single speaker or to two speakers: one directly in front of the participant, and one mounted in an adjacent corner, 90° to the right. In the perceived spatial separation conditions, the target and masker were presented from both speakers with delays that made them sound as if they were either colocated or separated. Results With real spatial manipulations, the mean SRM ranged from 7.1 to 11.4 dB, depending on the speech corpus. With perceived spatial manipulations, the mean SRM ranged from 1.8 to 3.1 dB. Whereas real separation improves the signal-to-noise ratio in the ear contralateral to the masker, SRM in the perceived spatial separation conditions is based solely on interaural timing cues. 
Conclusions The finding of robust SRM with widely available speech corpora supports the feasibility of measuring this important aspect of hearing in the audiology clinic. The finding of a small but significant SRM in the perceived spatial separation conditions suggests that modified materials could be used to evaluate the use of interaural timing cues specifically.
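As a minimal sketch (the thresholds below are hypothetical, not study data), SRM is the difference between the speech reception thresholds measured with the masker colocated with and separated from the target:

```python
# Spatial release from masking (SRM), in dB: the improvement in speech
# reception threshold (SRT, in dB SNR) when target and masker are spatially
# separated, relative to the colocated condition. Positive SRM means
# separation helped. Example thresholds are invented for illustration.

def spatial_release_from_masking(srt_colocated_db: float,
                                 srt_separated_db: float) -> float:
    """SRM in dB; lower (better) separated SRTs yield positive SRM."""
    return srt_colocated_db - srt_separated_db

# e.g., colocated SRT of -2 dB SNR vs. separated SRT of -10 dB SNR
print(spatial_release_from_masking(-2.0, -10.0))  # 8.0
```

A value of 8 dB falls within the 7.1–11.4 dB range the study reports for real spatial manipulations, while the 1.8–3.1 dB range for perceived separation reflects reliance on interaural timing cues alone.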

