Simulation of the effect of threshold elevation and loudness recruitment combined with reduced frequency selectivity on the intelligibility of speech in noise

1997 · Vol. 102(1) · pp. 603-615 · Authors: Yoshito Nejime, Brian C. J. Moore
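
The processing named in this title, simulating loudness recruitment together with threshold elevation, is commonly approximated in the hearing-simulation literature by expanding the temporal envelope within auditory-filter-like frequency bands. Below is a minimal Python sketch of that envelope-expansion idea; the band edges, filter order, and expansion exponent are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def simulate_recruitment(x, fs, bands, expansion=2.0):
    """Expand the temporal envelope within each band to mimic recruitment.

    x         : input signal (1-D array)
    fs        : sample rate in Hz
    bands     : list of (lo, hi) band edges in Hz (assumed, not from the paper)
    expansion : envelope exponent; values > 1 expand level differences
    """
    out = np.zeros(len(x))
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        env = np.abs(hilbert(band))              # temporal envelope
        carrier = band / np.maximum(env, 1e-12)  # temporal fine structure
        ref = env.max() + 1e-12                  # reference level for expansion
        out += carrier * ref * (env / ref) ** expansion
    return out
```

With expansion = 2.0, a 10-dB change in envelope level becomes a 20-dB change at the output, which is the defining property of recruitment: abnormally rapid growth of loudness above an elevated threshold.
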
2013 · Vol. 24(4) · pp. 258-273 · Authors: Ken W. Grant, Therese C. Walden

Background: Traditional audiometric measures, such as pure-tone thresholds or unaided word recognition in quiet, appear to be of marginal use in predicting speech understanding by hearing-impaired (HI) individuals in background noise, with or without amplification. Suprathreshold measures of auditory function (tolerance of noise, temporal and frequency resolution) appear to contribute more to success with amplification and may describe the distortion component of hearing more effectively. However, these measures are not typically obtained clinically. When combined with measures of audibility, suprathreshold measures of auditory distortion may provide a much more complete picture of the speech-in-noise deficits experienced by HI individuals.

Purpose: The primary goal of this study was to investigate the relationships among measures of speech recognition in noise, frequency selectivity, temporal acuity, modulation masking release, and informational masking in adult and elderly patients with sensorineural hearing loss, in order to determine whether peripheral distortion of suprathreshold sounds contributes to the varied outcomes experienced by such patients listening to speech in noise.

Research Design: A correlational study.

Study Sample: Twenty-seven patients with sensorineural hearing loss and four adults with normal hearing were enrolled in the study.

Data Collection and Analysis: The data were collected in a sound-attenuated test booth. For speech testing, subjects' verbal responses were scored by the experimenter and entered into a custom computer program. For frequency-selectivity and temporal-acuity measures, subject responses were recorded via a touch screen. Simple correlation, stepwise multiple linear regression, and repeated-measures analysis of variance were performed.

Results: Results showed that signal-to-noise ratio (SNR) loss could be only partially predicted from a listener's thresholds or from audibility measures such as the Speech Intelligibility Index (SII). Correlations between the SII and SNR loss were higher for the Hearing-in-Noise Test (HINT) than for the Quick Speech-in-Noise test (QSIN), with the SII accounting for 71% of the variance in SNR loss for the HINT but only 49% for the QSIN. However, adding listener age and the suprathreshold measures improved the prediction of SNR loss on the QSIN, accounting for nearly 71% of the variance.

Conclusions: Two standard clinical speech-in-noise tests, the QSIN and the HINT, were used in this study to obtain a measure of SNR loss. When administered clinically, the QSIN appears to be less redundant with hearing thresholds than the HINT and is a better indicator of a patient's suprathreshold deficit and its impact on understanding speech in noise. Additional factors related to aging, spectral resolution, and, to a lesser extent, temporal resolution improved the ability to predict SNR loss measured with the QSIN. For the HINT, a listener's audibility and age were the only two significant factors. For both the QSIN and the HINT, roughly 25–30% of the variance in individual differences in SNR loss (i.e., the dB difference in SNR between an individual HI listener and a control group of normal-hearing [NH] listeners at a specified performance level, usually 50% word or sentence recognition) remained unexplained, suggesting the need for additional measures of suprathreshold acuity (e.g., sensitivity to temporal fine structure) or cognitive function (e.g., memory and attention) to further improve the ability to account for individual variability in SNR loss.
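
The abstract defines SNR loss as the dB difference in SNR between an individual HI listener and a NH control group at a specified performance level (usually 50% correct). A minimal sketch of that computation, with invented numbers purely for illustration:

```python
import numpy as np

def snr_loss(hi_snr50_db, nh_snr50_db_group):
    """SNR-50 of the HI listener minus the mean SNR-50 of the NH control group (dB)."""
    return hi_snr50_db - np.mean(nh_snr50_db_group)

nh_controls = [-2.5, -3.0, -2.0, -2.8]  # hypothetical NH SNR-50 values in dB
print(snr_loss(4.0, nh_controls))       # -> ~6.6 dB: this listener needs ~6.6 dB more SNR
```
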


1993 · Vol. 36(2) · pp. 410-423 · Authors: Joseph W. Hall, John H. Grose, Brian C. J. Moore

Experiments 1 and 2 investigated the effect of frequency selectivity on comodulation masking release (CMR) in normal-hearing subjects, examining conditions where frequency selectivity was relatively good (low masker level at both the low [500-Hz] and high [2500-Hz] signal frequencies, and high masker level at the low signal frequency) and where frequency selectivity was somewhat degraded (high masker level at the high signal frequency). The first experiment investigated CMR in conditions where a narrow modulated noise band was centered on the signal frequency and a wider comodulated noise band was located below it. Signal frequencies were 500 and 2000 Hz. The masker level and the frequency separation between the on-signal band and the comodulated flanking band were varied. In addition to conditions where the flanking band and the on-signal band were presented at the same spectrum level, conditions were included where the spectrum level of the flanking band was 10 dB higher than that of the on-signal band, in order to accentuate the effects of reduced frequency selectivity. Results indicated that CMR was reduced in the 2000-Hz region when the masker level was high, when the frequency separation between the on-signal and flanking bands was small, and when a 10-dB level disparity existed between the on-signal and flanking bands. In the second experiment, CMR was investigated for narrow comodulated noise bands presented either alone or in the presence of a random-noise background. CMR increased slightly as the masker level increased, except at 2500 Hz when the noise background was present. The decrease in CMR at 2500 Hz with the high masker level and the noise background present could be explained in terms of reduced frequency selectivity. In a third experiment, we compared performance for maskers of equal absolute bandwidth at a low (500-Hz) and a high (2000-Hz) stimulus frequency. The results suggested that detection in modulated noise may be degraded by a reduction in the number of quasi-independent auditory filters contributing temporal envelope information. The effects found in the present study with normal-hearing listeners under conditions of degraded frequency selectivity may help explain part of the reduction in CMR seen in cochlear-impaired listeners with reduced frequency selectivity.
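
A core manipulation in these experiments is whether flanking noise bands share the on-signal band's temporal envelope (comodulated) or fluctuate independently. The sketch below shows one way such stimuli can be generated; the sample rate, bandwidth, and center frequencies are illustrative choices, not the study's exact values.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, dur, bw = 44100, 0.5, 50        # sample rate (Hz), duration (s), bandwidth (Hz)
n = int(fs * dur)
t = np.arange(n) / fs

def lowpass_noise(cutoff):
    """Gaussian noise lowpass-filtered in the frequency domain (an envelope generator)."""
    spec = np.fft.rfft(rng.standard_normal(n))
    spec[np.fft.rfftfreq(n, 1 / fs) > cutoff] = 0
    return np.fft.irfft(spec, n)

def band(fc, envelope):
    """A noise band centered on fc whose temporal envelope follows `envelope`."""
    return envelope * np.cos(2 * np.pi * fc * t + 2 * np.pi * rng.random())

env = np.abs(lowpass_noise(bw / 2))                      # one shared envelope
on_signal = band(2000, env)                              # band at the signal frequency
flank_comod = band(1500, env)                            # comodulated flank: same envelope
flank_indep = band(1500, np.abs(lowpass_noise(bw / 2)))  # independently modulated flank
# CMR = detection threshold with the independent flank minus threshold with the
# comodulated flank; comodulation typically lowers the signal threshold.
```
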


2011 · Vol. 60(6) · pp. 1196-1203 · Authors: Hong-Sub An, Gyu-Seok Park, Yu-Yong Jeon, Young-Rok Song, Sang-Min Lee

1992 · Vol. 91(6) · pp. 3402-3423 · Authors: Brian C. J. Moore, Brian R. Glasberg, Andrew Simpson

1980 · Vol. 23(3) · pp. 646-669 · Authors: Mary Florentine, Søren Buus, Bertram Scharf, Eberhard Zwicker

This study compares frequency selectivity, as measured by four different methods, in observers with normal hearing and in observers with conductive (non-otosclerotic), otosclerotic, noise-induced, or degenerative hearing losses. Each category of loss was represented by a group of 7 to 10 observers, who were tested at center frequencies of 500 Hz and 4000 Hz. For each group, the following four measurements were made: psychoacoustical tuning curves, narrow-band masking, two-tone masking, and loudness summation. Results showed that (a) frequency selectivity was reduced at frequencies where a cochlear hearing loss was present, (b) frequency selectivity was reduced regardless of the test level at which normally-hearing observers and observers with cochlear impairment were compared, (c) all four measures of frequency selectivity were significantly correlated, and (d) reduced frequency selectivity was positively correlated with the amount of cochlear hearing loss.
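
Finding (c), that the four selectivity measures were significantly correlated across observers, is the kind of result one can verify with a simple pairwise correlation matrix. A sketch follows using entirely invented data; the variable names echo the abstract's four methods, but the values are synthetic and assume only that each measure degrades with the amount of loss.

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs = 30
loss_db = rng.uniform(0, 60, n_obs)      # hypothetical cochlear loss per observer
noise = lambda s: rng.normal(0, s, n_obs)

# four hypothetical selectivity indices, each degrading with loss plus noise
measures = {
    "tuning_curve":       -0.05 * loss_db + noise(0.5),
    "narrowband_masking": -0.04 * loss_db + noise(0.5),
    "two_tone_masking":   -0.05 * loss_db + noise(0.6),
    "loudness_summation": -0.03 * loss_db + noise(0.5),
}

names = list(measures)
r = np.corrcoef(np.array([measures[k] for k in names]))  # pairwise Pearson r
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        print(f"{names[i]} vs {names[j]}: r = {r[i, j]:.2f}")
```
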

