An analytical method to convert between speech recognition thresholds and percentage-correct scores for speech-in-noise tests

2021 ◽  
Vol 150 (2) ◽  
pp. 1321-1331
Author(s):  
Cas Smits ◽  
Karina C. De Sousa ◽  
De Wet Swanepoel
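The conversion named in the title can be illustrated with a logistic psychometric function: the speech recognition threshold (SRT) is the SNR giving 50% correct, and an assumed slope lets the same curve map any test SNR to a percentage-correct score and back. A minimal sketch (the logistic form and slope parameterization here are illustrative assumptions, not the paper's exact method):

```python
import math

def percent_correct(snr, srt, slope):
    """Logistic psychometric function.
    srt: SNR (dB) at 50% correct; slope: slope at the SRT in proportion/dB."""
    return 100.0 / (1.0 + math.exp(4.0 * slope * (srt - snr)))

def srt_from_percent(pc, snr, slope):
    """Invert the function: recover the SRT from a percentage-correct
    score measured at a fixed test SNR."""
    return snr + math.log(100.0 / pc - 1.0) / (4.0 * slope)
```

By construction, `percent_correct(srt, srt, slope)` returns 50, and `srt_from_percent` undoes `percent_correct` exactly.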
2018 ◽  
Author(s):  
Tim Schoof ◽  
Pamela Souza

Objective: Older hearing-impaired adults typically experience difficulty understanding speech in noise. Most hearing aids address this issue using digital noise reduction. While noise reduction does not necessarily improve speech recognition, it may reduce the resources required to process the speech signal. Those freed resources may, in turn, aid the ability to perform another task while listening to speech (i.e., multitasking). This study examined to what extent changing the strength of digital noise reduction in hearing aids affects the ability to multitask. Design: Multitasking was measured using a dual-task paradigm combining a speech recognition task and a visual monitoring task. The speech recognition task involved sentence recognition in the presence of six-talker babble at signal-to-noise ratios (SNRs) of 2 and 7 dB. Participants were fit with commercially available hearing aids programmed with three noise reduction settings: off, mild, and strong. Study sample: Eighteen hearing-impaired older adults. Results: Noise reduction had no effect on the ability to multitask or on the ability to recognize speech in noise. Conclusions: Adjusting noise reduction settings in the clinic may therefore not improve performance on tasks like these.
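Fixed-SNR conditions such as the 2 and 7 dB used here are commonly produced by scaling the masker relative to the speech level; a minimal RMS-based sketch (an assumption about the calibration, not the authors' exact procedure):

```python
import math

def rms(x):
    """Root-mean-square level of a sample sequence."""
    return math.sqrt(sum(v * v for v in x) / len(x))

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so the speech-to-noise ratio of the mixture,
    measured on RMS levels, equals snr_db, then add the two signals."""
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20))
    return [s + gain * n for s, n in zip(speech, noise)]
```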


2021 ◽  
Vol 32 (08) ◽  
pp. 478-486
Author(s):  
Lisa G. Potts ◽  
Soo Jang ◽  
Cory L. Hillis

Abstract Background For cochlear implant (CI) recipients, speech recognition in noise is consistently poorer compared with recognition in quiet. Directional processing improves performance in noise and can be automatically activated based on acoustic scene analysis. The use of adaptive directionality with CI recipients is new and has not been investigated thoroughly, especially utilizing the recipients' preferred everyday signal processing, dynamic range, and/or noise reduction. Purpose This study utilized CI recipients' preferred everyday signal processing to evaluate four directional microphone options in a noisy environment to determine which option provides the best speech recognition in noise. A greater understanding of automatic directionality could ultimately improve CI recipients' speech-in-noise performance and better guide clinicians in programming. Study Sample Twenty-six unilateral and seven bilateral CI recipients with a mean age of 66 years and approximately 4 years of CI experience were included. Data Collection and Analysis Speech-in-noise performance was measured using eight loudspeakers in a 360-degree array with HINT sentences presented in restaurant noise. Four directional options were evaluated (automatic [SCAN], adaptive [Beam], fixed [Zoom], and omnidirectional) with participants' everyday-use signal processing options active. A mixed-model analysis of variance (ANOVA) and pairwise comparisons were performed. Results Automatic directionality (SCAN) resulted in the best speech-in-noise performance, although not significantly better than Beam. Omnidirectional performance was significantly poorer compared with the three other directional options. Individual participants performed best with each of the four directional options, with 16 performing best with automatic directionality. The majority of participants did not perform best with their everyday directional option.
Conclusion The individual variability seen in this study suggests that CI recipients should try different directional options to find their ideal program. For recipients who are not motivated to try different programs, however, automatic directionality is an appropriate everyday processing option.


Author(s):  
Bruna S. Mussoi

Purpose Music training has been proposed as a possible tool for auditory training in older adults, as it may improve both auditory and cognitive skills. However, the evidence to support such benefits is mixed. The goal of this study was to determine the differential effects of lifelong musical training and working memory on speech recognition in noise in older adults. Method A total of 31 musicians and nonmusicians aged 65–78 years took part in this cross-sectional study. Participants had a normal pure-tone average, with most having high-frequency hearing loss. Working memory (memory capacity) was assessed with the backward Digit Span test, and speech recognition in noise was assessed with three clinical tests (Quick Speech in Noise, Hearing in Noise Test, and Revised Speech Perception in Noise). Results Findings from this sample of older adults indicate that neither music training nor working memory was associated with differences on the speech recognition in noise measures used in this study. Similarly, duration of music training was not associated with speech-in-noise recognition. Conclusions Results from this study do not support the hypothesis that lifelong music training benefits speech recognition in noise. Similarly, an effect of working memory (memory capacity) was not apparent. While these findings may be related to the relatively small sample size, results across previous studies that investigated these effects have also been mixed. Prospective randomized music training studies may be able to better control for variability in outcomes associated with pre-existing factors and music training factors, as well as to examine the differential impact of music training and working memory on speech-in-noise recognition in older adults.


2020 ◽  
Vol 24 ◽  
pp. 233121652097563
Author(s):  
Christopher F. Hauth ◽  
Simon C. Berning ◽  
Birger Kollmeier ◽  
Thomas Brand

The equalization cancellation model is often used to predict the binaural masking level difference. Previously its application to speech in noise has required separate knowledge about the speech and noise signals to maximize the signal-to-noise ratio (SNR). Here, a novel, blind equalization cancellation model is introduced that can use the mixed signals. This approach does not require any assumptions about particular sound source directions. It uses different strategies for positive and negative SNRs, with the switching between the two steered by a blind decision stage utilizing modulation cues. The output of the model is a single-channel signal with enhanced SNR, which we analyzed using the speech intelligibility index to compare speech intelligibility predictions. In a first experiment, the model was tested on experimental data obtained in a scenario with spatially separated target and masker signals. Predicted speech recognition thresholds were in good agreement with measured speech recognition thresholds with a root mean square error less than 1 dB. A second experiment investigated signals at positive SNRs, which was achieved using time compressed and low-pass filtered speech. The results demonstrated that binaural unmasking of speech occurs at positive SNRs and that the modulation-based switching strategy can predict the experimental results.
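The "cancel" step of the model can be shown in miniature: with a diotic masker and an antiphasic target, subtracting one ear signal from the other removes the masker and doubles the target. A toy sketch (a full EC stage also equalizes interaural time and level differences, which this omits):

```python
def ec_cancel(left, right):
    """EC cancellation step: subtract the right-ear signal from the left.
    Components identical at both ears (a diotic masker) cancel exactly;
    components inverted at one ear (an antiphasic target) are doubled."""
    return [l - r for l, r in zip(left, right)]
```

For example, building N0Sπ ear signals as noise + target and noise - target and passing them through `ec_cancel` yields twice the target with the noise removed.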


2019 ◽  
Vol 30 (04) ◽  
pp. 315-326 ◽  
Author(s):  
Jumana Harianawala ◽  
Jason Galster ◽  
Benjamin Hornsby

Abstract Background The Hearing in Noise Test (HINT) is the most popular adaptive test used to evaluate speech-in-noise performance, especially in the context of hearing aid features. However, the number of conditions that can be tested with the HINT is limited by its small speech corpus. The American English Matrix Test (AEMT) is a new alternative adaptive speech-in-noise test with a larger speech corpus. Purpose To examine whether there was a difference in the performance of hearing aid wearers on the HINT and the AEMT. A secondary purpose, given the AEMT's steep performance-intensity function, was to determine whether the AEMT is more sensitive to changes in speech recognition resulting from directional (DIR) microphone processing in hearing aids. Research Design A repeated measures design was used: multiple measurements were made on each subject, each under a different experimental condition. Study Sample Ten adults with hearing loss participated in this study. Data Collection and Analysis All participants completed the AEMT and HINT, using adaptive and fixed test formats, while wearing hearing aids. Speech recognition was assessed in two hearing aid microphone settings, omnidirectional and fixed DIR. All testing was conducted via sound-field presentation. Performance on the HINT and AEMT was systematically compared across all test conditions using a linear model with repeated measures. Results Adult hearing aid users performed differently on the HINT and AEMT, with adaptive AEMT testing yielding significantly better (more negative) thresholds than the HINT. Slopes of performance-intensity functions obtained by testing at multiple fixed signal-to-noise ratios revealed a somewhat steeper slope for the HINT compared with the AEMT. Despite this steeper slope, the benefit provided by DIR microphones was not significantly different between the two speech tests. Conclusions The observation of similar DIR benefits on the HINT and AEMT suggests that the two tests are equally sensitive to changes in speech recognition thresholds following intervention. Therefore, the decision to use the AEMT or the HINT will depend on the purpose of the study and/or the technology being investigated. Other test-related factors, such as available sentence corpus, learning effects, and test time, will also influence test selection.
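The performance-intensity slopes compared above can be estimated from scores at two fixed SNRs via a logit transform, assuming a logistic psychometric function (an illustrative estimator, not necessarily the authors' fitting method):

```python
import math

def logit(p):
    """Log-odds of a proportion p in (0, 1)."""
    return math.log(p / (1.0 - p))

def pi_slope(snr1, pc1, snr2, pc2):
    """Slope of a logistic performance-intensity function at its midpoint,
    in proportion correct per dB, from percentage-correct scores at two SNRs."""
    return (logit(pc2 / 100.0) - logit(pc1 / 100.0)) / (4.0 * (snr2 - snr1))
```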


2021 ◽  
Vol 15 ◽  
Author(s):  
Jing Chen ◽  
Zhe Wang ◽  
Ruijuan Dong ◽  
Xinxing Fu ◽  
Yuan Wang ◽  
...  

Objective: This study was aimed at evaluating improvements in speech-in-noise recognition ability, as measured by signal-to-noise ratio (SNR), with the use of wireless remote microphone technology. These microphones transmit digital signals via radio frequency directly to hearing aids and may be a valuable assistive listening device for the hearing-impaired population of Mandarin speakers in China. Methods: Twenty-three adults (aged 19–80 years) and fourteen children (aged 8–17 years) with bilateral sensorineural hearing loss were recruited. The Mandarin Hearing in Noise Test was used to test speech recognition ability in adult subjects, and the Mandarin Hearing in Noise Test for Children was used for children. The subjects' perceived SNR was measured using sentence recognition ability at three listening distances: 1.5, 3, and 6 m. At each distance, SNR was obtained under three device settings: hearing aid microphone alone, wireless remote microphone alone, and hearing aid microphone and wireless remote microphone simultaneously. Results: At each test distance, for both adult and pediatric groups, speech-in-noise recognition thresholds were significantly lower with the use of the wireless remote microphone in comparison with the hearing aid microphones alone (P < 0.05), indicating better SNR performance with the wireless remote microphone. Moreover, when the wireless remote microphone was used, test distance had no effect on speech-in-noise recognition for either adults or children. Conclusion: Wireless remote microphone technology can significantly improve speech recognition performance in challenging listening environments for Mandarin-speaking hearing aid users in China.


2020 ◽  
Vol 63 (12) ◽  
pp. 4265-4276
Author(s):  
Lauren Calandruccio ◽  
Heather L. Porter ◽  
Lori J. Leibold ◽  
Emily Buss

Purpose Talkers often modify their speech when communicating with individuals who struggle to understand speech, such as listeners with hearing loss. This study evaluated the benefit of clear speech in school-age children and adults with normal hearing for speech-in-noise and speech-in-speech recognition. Method Masked sentence recognition thresholds were estimated for school-age children and adults using an adaptive procedure. In Experiment 1, the target and masker were summed and presented over a loudspeaker located directly in front of the listener. The masker was either speech-shaped noise or two-talker speech, and target sentences were produced using a clear or conversational speaking style. In Experiment 2, stimuli were presented over headphones. The two-talker speech masker was diotic (M0). Clear and conversational target sentences were presented either in phase (T0) or out of phase (Tπ) between the two ears. The M0Tπ condition introduces a segregation cue that was expected to improve performance. Results For speech presented over a single loudspeaker (Experiment 1), the clear-speech benefit was independent of age for the noise masker, but it increased with age for the two-talker masker. Similar age effects for the two-talker speech masker were seen under headphones with diotic presentation (M0T0), but a comparable clear-speech benefit as a function of age was observed with a binaural cue to facilitate segregation (M0Tπ). Conclusions Consistent with prior research, children showed a robust clear-speech benefit for speech-in-noise recognition. Immaturity in the ability to segregate target from masker speech may limit young children's ability to benefit from clear-speech modifications for speech-in-speech recognition under some conditions. When provided with a cue that facilitates segregation, children as young as 4–7 years of age derived a clear-speech benefit in a two-talker masker that was similar to the benefit experienced by adults.
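An adaptive procedure like the one used to estimate these masked sentence recognition thresholds can be sketched as a simple 1-up/1-down staircase, which converges on the 50%-correct SNR (the step size and threshold estimate below are assumptions, not the authors' exact rules):

```python
def adaptive_track(respond, start_snr, step_db, n_trials):
    """1-up/1-down staircase: lower the SNR after a correct response,
    raise it after an incorrect one; average the visited levels to
    estimate the 50%-correct threshold. respond(snr) -> bool."""
    snr, levels = start_snr, []
    for _ in range(n_trials):
        levels.append(snr)
        snr += -step_db if respond(snr) else step_db
    return sum(levels) / len(levels)
```

In practice the first few reversals are often discarded before averaging; this sketch averages every visited level for brevity.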


2020 ◽  
Vol 63 (8) ◽  
pp. 2789-2800
Author(s):  
Christina M. Roup ◽  
Donna E. Green ◽  
J. Riley DeBacker

Purpose This study assessed state anxiety as a function of speech recognition testing using three clinical measures of speech in noise and one clinical measure of dichotic speech recognition. Method Thirty young adults, 30 middle-age adults, and 25 older adults participated. State anxiety was measured pre– and post–speech recognition testing using the State–Trait Anxiety Inventory. Speech recognition was measured with the Revised Speech Perception in Noise Test, the Quick Speech-in-Noise Test, the Words-in-Noise Test, and the Dichotic Digits Test (DDT). Results Speech recognition performance was as expected: Older adults performed significantly poorer on all measures as compared to the young adults and significantly poorer on the Revised Speech Perception in Noise Test, the Quick Speech-in-Noise Test, and the Words-in-Noise Test as compared to the middle-age adults. On average, State–Trait Anxiety Inventory scores increased posttesting, with the middle-age adults exhibiting significantly greater increases in state anxiety as compared to the young and older adults. Increases in state anxiety were significantly greater for the DDT relative to the speech-in-noise tests for the middle-age adults only. Poorer DDT recognition performance was associated with higher levels of state anxiety. Conclusions Increases in state anxiety were observed after speech-in-noise and dichotic listening testing for all groups, with significant increases seen for the young and middle-age adults. Although the exact mechanisms could not be determined, multiple factors likely influenced the observed increases in state anxiety, including task difficulty, individual proficiency, and age.


2021 ◽  
pp. 019459982110363
Author(s):  
Margaret E. MacPhail ◽  
Nathan T. Connell ◽  
Douglas J. Totten ◽  
Mitchell T. Gray ◽  
David Pisoni ◽  
...  

Objective To compare differences in audiologic outcomes between slim modiolar electrode (SME) CI532 and slim lateral wall electrode (SLW) CI522 cochlear implant recipients. Study Design Retrospective cohort study. Setting Tertiary academic hospital. Methods Comparison of postoperative AzBio sentence scores in quiet (percentage correct) in adult cochlear implant recipients with SME or SLW matched for preoperative AzBio sentence scores in quiet and aided and unaided pure tone average. Results Patients with SLW (n = 52) and patients with SME (n = 37) had a similar mean (SD) age (62.0 [18.2] vs 62.6 [14.6] years, respectively), mean preoperative aided pure tone average (55.9 [20.4] vs 58.1 [16.4] dB; P = .59), and mean AzBio score (percentage correct, 11.1% [13.3%] vs 8.0% [11.5%]; P = .25). At last follow-up (SLW vs SME, 9.0 [2.9] vs 9.9 [2.6] months), postoperative mean AzBio scores in quiet were not significantly different (percentage correct, 70.8% [21.3%] vs 65.6% [24.5%]; P = .29), and data log usage was similar (12.9 [4.0] vs 11.3 [4.1] hours; P = .07). In patients with preoperative AzBio <10% correct, the 6-month mean AzBio scores were significantly better with SLW than SME (percentage correct, 70.6% [22.9%] vs 53.9% [30.3%]; P = .02). The intraoperative tip rollover rate was 8% for SME and 0% for SLW. Conclusions Cochlear implantation with SLW and SME provides comparable improvement in audiologic functioning. SME does not exhibit superior speech recognition outcomes when compared with SLW.


2012 ◽  
Vol 23 (10) ◽  
pp. 779-788 ◽  
Author(s):  
Andrew J. Vermiglio ◽  
Sigfrid D. Soli ◽  
Daniel J. Freed ◽  
Laurel M. Fisher

Background: Speech recognition in noise testing has been conducted at least since the 1940s (Dickson et al, 1946). The ability to recognize speech in noise is a distinct function of the auditory system (Plomp, 1978). According to Kochkin (2002), difficulty recognizing speech in noise is the primary complaint of hearing aid users. However, speech recognition in noise testing has not found widespread use in the field of audiology (Mueller, 2003; Strom, 2003; Tannenbaum and Rosenfeld, 1996). The audiogram has been used as the “gold standard” for hearing ability. However, the audiogram is a poor indicator of speech recognition in noise ability. Purpose: This study investigates the relationship between pure-tone thresholds, the articulation index, and the ability to recognize speech in quiet and in noise. Research Design: Pure-tone thresholds were measured for audiometric frequencies 250–6000 Hz. Pure-tone threshold groups were created. These included a normal threshold group and slight, mild, severe, and profound high-frequency pure-tone threshold groups. Speech recognition thresholds in quiet and in noise were obtained using the Hearing in Noise Test (HINT) (Nilsson et al, 1994; Vermiglio, 2008). The articulation index was determined by using Pavlovic's method with pure-tone thresholds (Pavlovic, 1989, 1991). Study Sample: Two hundred seventy-eight participants were tested. All participants were native speakers of American English. Sixty-three of the original participants were removed in order to create groups of participants with normal low-frequency pure-tone thresholds and relatively symmetrical high-frequency pure-tone threshold groups. The final set of 215 participants had a mean age of 33 yr with a range of 17–59 yr. Data Collection and Analysis: Pure-tone threshold data were collected using the Hughson-Westlake procedure. Speech recognition data were collected using a Windows-based HINT software system.
Statistical analyses were conducted using descriptive, correlational, and multivariate analysis of covariance (MANCOVA) statistics. Results: The MANCOVA analysis (where the effect of age was statistically removed) indicated that there were no significant differences in HINT performances between groups of participants with normal audiograms and those groups with slight, mild, moderate, or severe high-frequency hearing losses. With all of the data combined across groups, correlational analyses revealed significant correlations between pure-tone averages and speech recognition in quiet performance. Nonsignificant or significant but weak correlations were found between pure-tone averages and HINT thresholds. Conclusions: The ability to recognize speech in steady-state noise cannot be predicted from the audiogram. A new classification scheme of hearing impairment based on the audiogram and the speech reception in noise thresholds, as measured with the HINT, may be useful for the characterization of the hearing ability in the global sense. This classification scheme is consistent with Plomp's two aspects of hearing ability (Plomp, 1978).
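Pavlovic-style articulation index calculations weight the audibility in each frequency band by that band's importance. A minimal sketch of the band-audibility idea (the 30 dB speech dynamic range is standard AI practice, but the weights in the example are illustrative, not Pavlovic's published values):

```python
def articulation_index(thresholds_db, speech_peaks_db, importance):
    """Sum band importance times band audibility, where audibility is
    the fraction of a 30 dB speech dynamic range lying above threshold."""
    ai = 0.0
    for thr, peak, w in zip(thresholds_db, speech_peaks_db, importance):
        audible = max(0.0, min(30.0, peak - thr)) / 30.0
        ai += w * audible
    return ai
```

With normalized weights the index runs from 0 (nothing audible) to 1 (full audibility in every band).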

