Recognition of Speech in Noise with New Hearing Instrument Compression Release Settings Requires Explicit Cognitive Storage and Processing Capacity

2007 ◽  
Vol 18 (07) ◽  
pp. 618-631 ◽  
Author(s):  
Catharina Foo ◽  
Mary Rudner ◽  
Jerker Rönnberg ◽  
Thomas Lunner

Evidence suggests that cognitive capacity predicts the ability to benefit from specific compression release settings in non-linear digital hearing instruments. Previous studies have investigated the predictive value of various cognitive tests in relation to aided speech recognition in noise using compression release settings that had been experienced for a certain period. However, the predictive value of cognitive tests with new settings, to which the user has not had the opportunity to become accustomed, has not been studied. In the present study, we compare the predictive values of two cognitive tests, reading span and letter monitoring, in relation to aided speech recognition in noise for 32 habitual hearing instrument users using new compression release settings. We found that reading span was a strong predictor of speech recognition in noise with new compression release settings. This result generalizes previous findings from experienced test settings to new test settings, for both speech recognition in noise tests used in the present study (Hagerman sentences and the HINT). Letter monitoring, on the other hand, was not found to be a strong predictor of speech recognition in noise with new compression release settings.

2012 ◽  
Vol 23 (08) ◽  
pp. 577-589 ◽  
Author(s):  
Mary Rudner ◽  
Thomas Lunner ◽  
Thomas Behrens ◽  
Elisabet Sundewall Thorén ◽  
Jerker Rönnberg

Background: Recently there has been interest in using subjective ratings as a measure of perceived effort during speech recognition in noise. Perceived effort may be an indicator of cognitive load. Thus, subjective effort ratings during speech recognition in noise may covary both with signal-to-noise ratio (SNR) and individual cognitive capacity. Purpose: The present study investigated the relation between subjective ratings of the effort involved in listening to speech in noise, speech recognition performance, and individual working memory (WM) capacity in hearing impaired hearing aid users. Research Design: In two experiments, participants with hearing loss rated perceived effort during aided speech perception in noise. Noise type and SNR were manipulated in both experiments, and in the second experiment hearing aid compression release settings were also manipulated. Speech recognition performance was measured along with WM capacity. Study Sample: There were 46 participants in all with bilateral mild to moderate sloping hearing loss. In Experiment 1 there were 16 native Danish speakers (eight women and eight men) with a mean age of 63.5 yr (SD = 12.1) and average pure tone (PT) threshold of 47.6 dB (SD = 9.8). In Experiment 2 there were 30 native Swedish speakers (19 women and 11 men) with a mean age of 70 yr (SD = 7.8) and average PT threshold of 45.8 dB (SD = 6.6). Data Collection and Analysis: A visual analog scale (VAS) was used for effort rating in both experiments. In Experiment 1, effort was rated at individually adapted SNRs, while in Experiment 2 it was rated at fixed SNRs. Speech recognition in noise performance was measured using adaptive procedures in both experiments, with Dantale II sentences in Experiment 1 and Hagerman sentences in Experiment 2.
Results: In both experiments, there was a strong and significant relation between rated effort and SNR that was independent of individual WM capacity, whereas the relation between rated effort and noise type seemed to be influenced by individual WM capacity. Experiment 2 showed that hearing aid compression setting influenced rated effort. Conclusions: Subjective ratings of the effort involved in speech recognition in noise reflect SNRs, and individual cognitive capacity seems to influence relative rating of noise type.


2021 ◽  
Vol 32 (08) ◽  
pp. 478-486
Author(s):  
Lisa G. Potts ◽  
Soo Jang ◽  
Cory L. Hillis

Abstract Background For cochlear implant (CI) recipients, speech recognition in noise is consistently poorer compared with recognition in quiet. Directional processing improves performance in noise and can be automatically activated based on acoustic scene analysis. The use of adaptive directionality with CI recipients is new and has not been investigated thoroughly, especially utilizing the recipients' preferred everyday signal processing, dynamic range, and/or noise reduction. Purpose This study utilized CI recipients' preferred everyday signal processing to evaluate four directional microphone options in a noisy environment to determine which option provides the best speech recognition in noise. A greater understanding of automatic directionality could ultimately improve CI recipients' speech-in-noise performance and better guide clinicians in programming. Study Sample Twenty-six unilateral and seven bilateral CI recipients with a mean age of 66 years and approximately 4 years of CI experience were included. Data Collection and Analysis Speech-in-noise performance was measured using eight loudspeakers in a 360-degree array with HINT sentences presented in restaurant noise. Four directional options were evaluated (automatic [SCAN], adaptive [Beam], fixed [Zoom], and omnidirectional) with participants' everyday signal processing options active. A mixed-model analysis of variance (ANOVA) and pairwise comparisons were performed. Results Automatic directionality (SCAN) resulted in the best speech-in-noise performance, although it was not significantly better than Beam. Omnidirectional performance was significantly poorer compared with the three other directional options. The number of participants who performed best with each of the four directional options varied, with 16 performing best with automatic directionality. The majority of participants did not perform best with their everyday directional option.
Conclusion The individual variability seen in this study suggests that CI recipients should try different directional options to find their ideal program. However, for recipients who are not motivated to trial different programs, automatic directionality is an appropriate everyday processing option.


Author(s):  
Bruna S. Mussoi

Purpose Music training has been proposed as a possible tool for auditory training in older adults, as it may improve both auditory and cognitive skills. However, the evidence to support such benefits is mixed. The goal of this study was to determine the differential effects of lifelong musical training and working memory on speech recognition in noise, in older adults. Method A total of 31 musicians and nonmusicians aged 65–78 years took part in this cross-sectional study. Participants had a normal pure-tone average, with most having high-frequency hearing loss. Working memory (memory capacity) was assessed with the backward Digit Span test, and speech recognition in noise was assessed with three clinical tests (Quick Speech in Noise, Hearing in Noise Test, and Revised Speech Perception in Noise). Results Findings from this sample of older adults indicate that neither music training nor working memory was associated with differences on the speech recognition in noise measures used in this study. Similarly, duration of music training was not associated with speech-in-noise recognition. Conclusions Results from this study do not support the hypothesis that lifelong music training benefits speech recognition in noise. Similarly, an effect of working memory (memory capacity) was not apparent. While these findings may be related to the relatively small sample size, results across previous studies that investigated these effects have also been mixed. Prospective randomized music training studies may be able to better control for variability in outcomes associated with pre-existing and music training factors, as well as to examine the differential impact of music training and working memory for speech-in-noise recognition in older adults.


2019 ◽  
Vol 90 (e7) ◽  
pp. A39.1-A39
Author(s):  
Jonathan JD Baird-Gunning ◽  
Shaun Zhai ◽  
Brett Jones ◽  
Neha Nandal ◽  
Chandi Das ◽  
...  

Introduction: 25%-30% of patients admitted with acute stroke are stroke mimics. Clinical assessment plays a major role in diagnosis in the hyperacute clinical setting. Identifying physical signs that correctly identify stroke is therefore important. A retrospective study (Gargalas et al, 2017) suggested that the presence of sensory inattention (or neglect) was seen exclusively in stroke patients, suggesting that inattention might be a reliable discriminator between stroke and mimics. This study aimed to test that hypothesis. Methods: Prospective assessment of suspected stroke patients for the presence of neglect (NIHSS definition). Neglect could be visual and/or somatosensory. The presence of neglect was then correlated with the eventual diagnosis at 48 hours. Sensitivity, specificity, and predictive values were calculated. A post-hoc analysis evaluated the correlation of neglect with large vessel occlusion in patients who underwent angiography. Results: 115 patients were recruited, 70 ultimately with stroke and 45 with other diagnoses. Neglect was present in 27 patients (of whom 23 had stroke) and absent in 88. This yielded: sensitivity 32.9%, specificity 91.1%, positive predictive value 85.2%, and negative predictive value 41.9%. Two patients with neglect had a diagnosis of functional illness, one a seizure, and one a brain tumour. Neglect was present in 7 of 8 patients with large vessel occlusion (sensitivity 87.5%) and was absent in all patients who did not have large vessel occlusion on angiogram. Conclusion: When present, neglect is a strong predictor of organic pathology and large vessel occlusion. However, it is not 100% specific and can be seen in functional presentations. Reference: Gargalas S, Weeks R, Khan-Bourne N, Shotbolt P, Simblett S, Ashraf L, Doyle C, Bancroft V, David AS. Incidence and outcome of functional stroke mimics admitted to a hyperacute stroke unit. J Neurol Neurosurg Psychiatry 2017;88:2-6.
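The headline figures above follow from the standard 2x2 confusion-table definitions. A minimal sketch (Python; the function name is ours, not the authors', and the counts are taken from the abstract):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic-test metrics, returned as percentages."""
    return {
        "sensitivity": 100 * tp / (tp + fn),  # true positives / all with disease
        "specificity": 100 * tn / (tn + fp),  # true negatives / all without disease
        "ppv": 100 * tp / (tp + fp),          # positive predictive value
        "npv": 100 * tn / (tn + fn),          # negative predictive value
    }

# Counts from the abstract: 23 strokes with neglect (TP), 4 non-strokes
# with neglect (FP), 70 strokes in total, 45 non-strokes in total.
m = diagnostic_metrics(tp=23, fp=4, fn=70 - 23, tn=45 - 4)
print(round(m["sensitivity"], 1))  # 32.9
print(round(m["specificity"], 1))  # 91.1
print(round(m["ppv"], 1))          # 85.2
```

With so few neglect-positive cases, the high specificity and PPV are the clinically useful quantities here; the low sensitivity means absence of neglect rules little out.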


2012 ◽  
Vol 23 (10) ◽  
pp. 779-788 ◽  
Author(s):  
Andrew J. Vermiglio ◽  
Sigfrid D. Soli ◽  
Daniel J. Freed ◽  
Laurel M. Fisher

Background: Speech recognition in noise testing has been conducted at least since the 1940s (Dickson et al, 1946). The ability to recognize speech in noise is a distinct function of the auditory system (Plomp, 1978). According to Kochkin (2002), difficulty recognizing speech in noise is the primary complaint of hearing aid users. However, speech recognition in noise testing has not found widespread use in the field of audiology (Mueller, 2003; Strom, 2003; Tannenbaum and Rosenfeld, 1996). The audiogram has been used as the "gold standard" for hearing ability. However, the audiogram is a poor indicator of speech recognition in noise ability. Purpose: This study investigates the relationship between pure-tone thresholds, the articulation index, and the ability to recognize speech in quiet and in noise. Research Design: Pure-tone thresholds were measured for audiometric frequencies 250–6000 Hz. Pure-tone threshold groups were created. These included a normal threshold group and slight, mild, severe, and profound high-frequency pure-tone threshold groups. Speech recognition thresholds in quiet and in noise were obtained using the Hearing in Noise Test (HINT) (Nilsson et al, 1994; Vermiglio, 2008). The articulation index was determined by using Pavlovic's method with pure-tone thresholds (Pavlovic, 1989, 1991). Study Sample: Two hundred seventy-eight participants were tested. All participants were native speakers of American English. Sixty-three of the original participants were removed in order to create groups of participants with normal low-frequency pure-tone thresholds and relatively symmetrical high-frequency pure-tone threshold groups. The final set of 215 participants had a mean age of 33 yr with a range of 17–59 yr. Data Collection and Analysis: Pure-tone threshold data were collected using the Hughson-Westlake procedure. Speech recognition data were collected using a Windows-based HINT software system.
Statistical analyses were conducted using descriptive, correlational, and multivariate analysis of covariance (MANCOVA) statistics. Results: The MANCOVA analysis (where the effect of age was statistically removed) indicated that there were no significant differences in HINT performances between groups of participants with normal audiograms and those groups with slight, mild, moderate, or severe high-frequency hearing losses. With all of the data combined across groups, correlational analyses revealed significant correlations between pure-tone averages and speech recognition in quiet performance. Nonsignificant or significant but weak correlations were found between pure-tone averages and HINT thresholds. Conclusions: The ability to recognize speech in steady-state noise cannot be predicted from the audiogram. A new classification scheme of hearing impairment based on the audiogram and the speech reception in noise thresholds, as measured with the HINT, may be useful for the characterization of the hearing ability in the global sense. This classification scheme is consistent with Plomp's two aspects of hearing ability (Plomp, 1978).


Author(s):  
Julie Beadle ◽  
Jeesun Kim ◽  
Chris Davis

Purpose: Listeners understand significantly more speech in noise when the talker's face can be seen (visual speech) in comparison to an auditory-only baseline (a visual speech benefit). This study investigated whether the visual speech benefit is reduced when the correspondence between auditory and visual speech is uncertain, and whether any reduction is affected by listener age (older vs. younger) and by how severely the auditory signal is masked. Method: Older and younger adults completed a speech recognition in noise task that included an auditory-only condition and four auditory–visual (AV) conditions in which one, two, four, or six silent talking face videos were presented. One face always matched the auditory signal; the other face(s) did not. Auditory speech was presented in noise at −6 and −1 dB signal-to-noise ratio (SNR). Results: When the SNR was −6 dB, for both age groups, the standard-sized visual speech benefit was reduced as more talking faces were presented. When the SNR was −1 dB, younger adults received the standard-sized visual speech benefit even when two talking faces were presented, whereas older adults did not. Conclusions: The size of the visual speech benefit obtained by older adults was always smaller when AV correspondence was uncertain; this was not the case for younger adults. Difficulty establishing AV correspondence may be a factor that limits older adults' speech recognition in noisy AV environments. Supplemental Material https://doi.org/10.23641/asha.16879549
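The −6 and −1 dB SNRs used above express the level of the speech relative to the noise on a logarithmic scale. As an illustrative sketch only (these helper functions are ours, not part of the study's procedure), the dB value maps to a linear amplitude ratio as follows:

```python
import math

def snr_db(signal_rms, noise_rms):
    """SNR in dB from RMS amplitudes: 20 * log10(A_signal / A_noise)."""
    return 20 * math.log10(signal_rms / noise_rms)

def amplitude_ratio(snr_in_db):
    """Linear signal-to-noise amplitude ratio for a target SNR in dB."""
    return 10 ** (snr_in_db / 20)

# At -6 dB SNR the speech amplitude is roughly half the noise amplitude;
# at -1 dB the two are nearly equal, so -1 dB is the easier condition.
print(round(amplitude_ratio(-6), 2))  # 0.5
print(round(amplitude_ratio(-1), 2))  # 0.89
```

This is why the −6 dB condition is the harder one: the speech sits well below the masker, leaving more room for visual speech to help (or for mismatched faces to hurt).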


2015 ◽  
Vol 26 (06) ◽  
pp. 540-546 ◽  
Author(s):  
Eric Hoover ◽  
Lauren Pasquesi ◽  
Pamela Souza

Background: Temporal resolution is important for speech recognition and may contribute to variability in speech recognition among patients. Clinical tests of temporal resolution are available, but it is not clear how closely results of those tests correspond to results of traditional temporal resolution tests. Purpose: The purpose of this study was to compare the Gaps-in-Noise (GIN) test to a traditional measure of gap detection. Study Sample: This study included older adults with hearing loss and younger adults with normal hearing. Data Collection and Analysis: Participants completed one practice and two test blocks of each gap detection test, and a measure of speech-in-noise recognition. Individual data were correlated to examine the relationship between the tests. Results: The GIN and traditional gap detection were significantly, but not highly correlated. The traditional gap detection test contributed to variance in speech recognition in noise, while the GIN did not. Conclusions: The brevity and ease of implementing the GIN in the clinic make it a viable test of temporal resolution. However, it differs from traditional measures in implementation, and as a result relies on different cognitive factors. The GIN thresholds should be interpreted carefully and not presumed to represent an approximation of traditional gap detection thresholds.


2005 ◽  
Vol 16 (05) ◽  
pp. 270-277 ◽  
Author(s):  
Todd A. Ricketts ◽  
Benjamin W.Y. Hornsby

This brief report discusses the effect of digital noise reduction (DNR) processing on aided speech recognition and sound quality measures in 14 adults fitted with a commercial hearing aid. Measures of speech recognition and sound quality were obtained in two different speech-in-noise conditions (71 dBA speech, +6 dB SNR and 75 dBA speech, +1 dB SNR). The results revealed that the presence or absence of DNR processing did not impact speech recognition in noise (either positively or negatively). Paired comparisons of sound quality for the same speech-in-noise signals, however, revealed a strong preference for DNR processing. These data suggest that at least one implementation of DNR processing is capable of providing improved sound quality for speech in noise in the absence of improved speech recognition.


2002 ◽  
Vol 116 (S28) ◽  
pp. 47-51 ◽  
Author(s):  
Sunil N. Dutt ◽  
Ann-Louise McDermott ◽  
Stuart P. Burrell ◽  
Huw R. Cooper ◽  
Andrew P. Reid ◽  
...  

The Birmingham bone-anchored hearing aid (BAHA) programme, since its inception in 1988, has fitted more than 300 patients with unilateral bone-anchored hearing aids. Recently, some of the patients who benefited substantially from unilateral aids applied for bilateral amplification. To date, 15 patients have been fitted with bilateral BAHAs. The benefits of bilateral amplification have been compared to unilateral amplification in 11 of these patients who have used their second BAHA for 12 months or longer. Following a subjective analysis in the form of comprehensive questionnaires, objective testing was undertaken to assess specific issues such as 'speech recognition in quiet', 'speech recognition in noise' and a modified 'speech-in-simulated-party-noise' (Plomp) test. 'Speech in quiet' testing revealed a 100 per cent score with both unilateral and bilateral BAHAs. With 'speech in noise', all 11 patients scored marginally better with bilateral aids compared to their best unilateral responses. The modified Plomp test demonstrated that bilateral BAHAs provided maximum flexibility when the origin of noise cannot be controlled, as in day-to-day situations. In this small case series the results are positive and are comparable to the experience of the Nijmegen BAHA group.

