Masking of Speech in Young and Elderly Listeners With Hearing Loss

1994, Vol 37 (3), pp. 655-661
Author(s): Pamela E. Souza, Christopher W. Turner

This study examined the contributions of various properties of background noise to the speech recognition difficulties experienced by young and elderly listeners with hearing loss. Three groups of subjects participated: young listeners with normal hearing, young listeners with sensorineural hearing loss, and elderly listeners with sensorineural hearing loss. Sensitivity thresholds up to 4000 Hz of the young and elderly groups of listeners with hearing loss were closely matched, and a high-pass masking noise was added to minimize the contributions of high-frequency (above 4000 Hz) thresholds, which were not closely matched. Speech recognition scores for monosyllables were obtained in the high-pass noise alone and in three noise backgrounds. The latter consisted of high-pass noise plus one of three maskers: speech-spectrum noise, speech-spectrum noise temporally modulated by the envelope of multi-talker babble, and multi-talker babble. For all conditions, the groups with hearing impairment consistently scored lower than the group with normal hearing. Although there was a trend toward poorer speech recognition scores as the masker condition more closely resembled the speech babble, the effect of masker condition was not statistically significant. There was no interaction between group and condition, implying that listeners with normal hearing and listeners with hearing loss are affected similarly by the type of background noise when the long-term spectrum of the masker is held constant. A significant effect of age was not observed. In addition, masked thresholds for pure tones in the presence of the speech-spectrum masker were not different for the young and elderly listeners with hearing loss. These results suggest that, for both steady-state and modulated background noises, difficulties in speech recognition for monosyllables are due primarily, and perhaps exclusively, to the presence of sensorineural hearing loss itself, and not to age-specific factors.
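The envelope-modulated masker described above can be approximated in a short numpy sketch. The function name, the smoothing bandwidth, and the envelope method (full-wave rectification followed by a moving-average lowpass) are illustrative assumptions, not the study's actual procedure:

```python
import numpy as np

def envelope_modulated_noise(babble, fs, smooth_hz=60.0, seed=0):
    """Impose the temporal envelope of a babble waveform onto a fresh
    Gaussian noise carrier (a sketch, not the study's exact stimuli).

    The envelope is estimated by full-wave rectification followed by a
    moving-average lowpass of roughly `smooth_hz` bandwidth.
    """
    rng = np.random.default_rng(seed)
    win = max(1, int(fs / smooth_hz))              # ~1/smooth_hz-long window
    env = np.convolve(np.abs(babble), np.ones(win) / win, mode="same")
    carrier = rng.standard_normal(babble.size)     # broadband noise carrier
    return env * carrier
```

In the study the carrier was speech-spectrum noise; a white Gaussian carrier is used here only for brevity, and the smoothing cutoff is a placeholder value.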

1978, Vol 21 (1), pp. 5-36
Author(s): Marilyn D. Wang, Charlotte M. Reed, Robert C. Bilger

It has been found that listeners with sensorineural hearing loss who show similar patterns of consonant confusions also tend to have similar audiometric profiles. The present study determined whether normal listeners, presented with filtered speech, would produce consonant confusions similar to those previously reported for hearing-impaired listeners. Consonant confusion matrices were obtained from eight normal-hearing subjects for four sets of CV and VC nonsense syllables presented under six high-pass and six low-pass filtering conditions. Patterns of consonant confusion for each condition were described using phonological features in a sequential information analysis. Severe low-pass filtering produced consonant confusions comparable to those of listeners with high-frequency hearing loss. Severe high-pass filtering gave a result comparable to that of patients with flat or rising audiograms. Mild filtering resulted in confusion patterns comparable to those of listeners with essentially normal hearing. An explanation is given in terms of the spectrum and level of the speech and the configuration of the individual listener’s audiogram.
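A low-pass filtering condition like those above can be sketched with a windowed-sinc FIR filter; the tap count and cutoff below are illustrative placeholders, not the study's actual filter specifications:

```python
import numpy as np

def lowpass_speech(signal, fs, cutoff_hz, numtaps=257):
    """Low-pass filter a waveform with a Hamming-windowed sinc FIR kernel.

    A generic sketch of a filtering condition; the study's actual cutoff
    frequencies and rejection slopes are not reproduced here.
    """
    t = np.arange(numtaps) - (numtaps - 1) / 2.0
    h = np.sinc(2.0 * cutoff_hz / fs * t) * np.hamming(numtaps)
    h /= h.sum()                       # normalize for unity gain at DC
    return np.convolve(signal, h, mode="same")
```

A matching high-pass condition can be built from the same kernel by spectral inversion (a unit impulse minus the low-pass kernel).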


1970, Vol 13 (2), pp. 426-437
Author(s): Ellen S. Martin, J. M. Pickett

Pure-tone auditory thresholds were obtained in quiet and in three levels of masking noise for one normal-hearing group and five groups of subjects with different degrees of sensorineural loss. The masker was a low-pass noise, cut off at 250 Hz. It was presented at overall levels of 77, 97, and 107 dB SPL. Pure-tone thresholds were obtained at test frequencies within and above the masking band. A measure of noise rejection slope was used to describe spread of masking. Degree of loss, configuration of loss, and level of masking noise appear to have marked influences on upward spread of masking patterns in sensorineural subjects.


1992, Vol 35 (6), pp. 1410-1421
Author(s): Gail A. Takahashi, Sid P. Bacon

Temporal processing of suprathreshold sounds was examined in a group of young normal-hearing subjects (mean age of 26.0 years), and in three groups of older subjects (mean ages of 54.3, 64.8, and 72.2 years) with normal hearing or mild sensorineural hearing loss. Three experiments were performed. In the first experiment (modulation detection), subjects were asked to detect sinusoidal amplitude modulation (SAM) of a broadband noise, for modulation frequencies ranging from 2 to 1024 Hz. In the second experiment (modulation masking), the task was to detect a SAM signal (modulation frequency of 8 Hz) in the presence of a 100%-modulated SAM masker. Masker modulation frequency ranged from 2 to 64 Hz. In the final experiment, speech understanding was measured as a function of signal-to-noise ratio in both an unmodulated background noise and in a SAM background noise that had a modulation frequency of 8 Hz and a modulation depth of 100%. Except for a very modest correlation between age and modulation detection sensitivity at low modulation frequencies, there were no significant effects of age once the effect of hearing loss was taken into account. The results of the experiments suggest, however, that subjects with even a mild sensorineural hearing loss may have difficulty with a modulation masking task, and may not understand speech as well as normal-hearing subjects do in a modulated noise background.
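The SAM stimuli used across these experiments have a simple construction, which can be sketched as follows (the duration, sampling rate, and carrier choice are illustrative assumptions):

```python
import numpy as np

def sam_noise(duration_s, fs, fm_hz, depth=1.0, seed=0):
    """Sinusoidally amplitude-modulated (SAM) broadband noise:

        y(t) = [1 + depth * sin(2*pi*fm*t)] * n(t)

    where depth=1.0 corresponds to 100% modulation.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(int(duration_s * fs)) / fs
    carrier = rng.standard_normal(t.size)          # broadband Gaussian noise
    modulator = 1.0 + depth * np.sin(2.0 * np.pi * fm_hz * t)
    return modulator * carrier
```

With depth=1.0 and fm_hz=8.0 this matches the 8-Hz, 100%-depth modulation of the masker and background noise in the second and third experiments, carrier details aside.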


1990, Vol 33 (4), pp. 726-735
Author(s): Larry E. Humes, Lisa Roberts

The role that sensorineural hearing loss plays in the speech-recognition difficulties of the hearing-impaired elderly is examined. One approach to this issue was to make between-group comparisons of performance for three groups of subjects: (a) young normal-hearing adults; (b) elderly hearing-impaired adults; and (c) young normal-hearing adults with a sensorineural hearing loss, equivalent to that of the elderly subjects, simulated by a spectrally shaped masking noise. Another approach to this issue employed correlational analyses to examine the relation between audibility and speech recognition within the group of elderly hearing-impaired subjects. An additional approach was pursued in which an acoustical index incorporating adjustments for threshold elevation was used to examine the role audibility played in the speech-recognition performance of the hearing-impaired elderly. A wide range of listening conditions was sampled in this experiment. The conclusion was that the primary determiner of speech-recognition performance in the elderly hearing-impaired subjects was their threshold elevation.


2019, Vol 62 (3), pp. 758-767
Author(s): Raymond L. Goldsworthy, Kali L. Markle

Purpose: Speech recognition deteriorates with hearing loss, particularly in fluctuating background noise. This study examined how hearing loss affects speech recognition in different types of noise, to clarify how characteristics of the noise interact with the benefit listeners receive when listening in fluctuating compared to steady-state noise.

Method: Speech reception thresholds were measured for a closed set of spondee words in children (ages 5–17 years) in quiet, speech-spectrum noise, 2-talker babble, and instrumental music. Twenty children with normal hearing and 43 children with hearing loss participated; the children with hearing loss were subdivided into cochlear implant (18 children) and hearing aid (25 children) groups. A cohort of adults with normal hearing was included for comparison.

Results: Hearing loss had a large effect on speech recognition in every condition, but the effect was largest in 2-talker babble and smallest in speech-spectrum noise. Children with normal hearing had better speech recognition in 2-talker babble than in speech-spectrum noise, whereas children with hearing loss had worse recognition in 2-talker babble than in speech-spectrum noise. Almost all subjects had better speech recognition in instrumental music than in speech-spectrum noise, though the difference was smaller for children with hearing loss.

Conclusions: Speech recognition is more sensitive to the effects of hearing loss when measured in fluctuating rather than steady-state noise. Speech recognition in fluctuating noise depends on an interaction of hearing loss with characteristics of the background noise; specifically, children with hearing loss derived a substantial benefit from fluctuating noise when measured in instrumental music compared to 2-talker babble.
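Speech reception thresholds of this kind are typically measured with an adaptive up-down track. The sketch below is a generic 1-up/1-down procedure with a simulated listener whose responses follow a logistic psychometric function; it is not the study's actual protocol, and all parameter values are illustrative:

```python
import numpy as np

def track_srt(true_srt_db, step_db=2.0, n_trials=200, start_db=10.0, seed=0):
    """Generic 1-up/1-down adaptive track converging on the 50%-correct
    level (a sketch, not the study's protocol).

    The simulated listener answers correctly with probability given by a
    logistic psychometric function centered on true_srt_db (slope ~1/dB).
    """
    rng = np.random.default_rng(seed)
    level, levels = start_db, []
    for _ in range(n_trials):
        levels.append(level)
        p_correct = 1.0 / (1.0 + np.exp(-(level - true_srt_db)))
        if rng.random() < p_correct:
            level -= step_db           # correct: present a harder (lower) level
        else:
            level += step_db           # incorrect: present an easier level
    return float(np.mean(levels[n_trials // 2:]))   # average the settled half
```

A 1-up/1-down rule converges on the 50%-correct point of the psychometric function, which matches the usual definition of a speech reception threshold.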

