Recognition of Voiceless Fricatives by Normal and Hearing-Impaired Subjects

1990 ◽  
Vol 33 (3) ◽  
pp. 440-449 ◽  
Author(s):  
Fan-Gang Zeng ◽  
Christopher W. Turner

The purpose of this study was to identify the perceptual cues sufficient for recognition of four voiceless fricative consonants [s, f, θ, ʃ] followed by the same vowel [i:] in normal-hearing and hearing-impaired adult listeners. Subjects identified the four CV speech tokens in a closed-set response task across a range of presentation levels. Fricative syllables were either produced by a human speaker in the natural stimulus set or generated by a computer program in the synthetic stimulus set. By comparing conditions in which subjects were presented with equivalent degrees of audibility for individual fricatives, it was possible to separate the factor of lack of audibility from that of loss of suprathreshold discriminability. Results indicate that (a) the frication burst portion may serve as a sufficient cue for correct recognition of voiceless fricatives by normal-hearing subjects, whereas the more intense CV transition portion, though perhaps not necessary, can also help these subjects distinguish place information, particularly at low presentation levels; (b) hearing-impaired subjects achieved close-to-normal recognition performance when given equivalent audibility of the frication cue, but poorer-than-normal performance when given equivalent audibility of only the transition cue; (c) the difficulty that hearing-impaired subjects have in perceiving fricatives under normal circumstances may therefore be due to two factors: the lack of audibility of the frication cue and the loss of discriminability of the transition cue.

1991 ◽  
Vol 34 (5) ◽  
pp. 1180-1184 ◽  
Author(s):  
Larry E. Humes ◽  
Kathleen J. Nelson ◽  
David B. Pisoni

The Modified Rhyme Test (MRT), recorded using natural speech and two forms of synthetic speech, DECtalk and Votrax, was used to measure both open-set and closed-set speech-recognition performance. Performance of hearing-impaired elderly listeners was compared to that of two groups of young normal-hearing adults, one listening in quiet and the other listening in a background of spectrally shaped noise designed to simulate the peripheral hearing loss of the elderly. Votrax synthetic speech yielded significant decrements in speech recognition compared to either natural or DECtalk synthetic speech for all three subject groups. There were no differences in performance between natural speech and DECtalk speech for the elderly hearing-impaired listeners or the young listeners with simulated hearing loss. The normal-hearing young adults listening in quiet outperformed both of the other groups, but there were no differences in performance between the young listeners with simulated hearing loss and the elderly hearing-impaired listeners. When the closed-set identification of synthetic speech was compared to its open-set recognition, the hearing-impaired elderly gained as much from the reduction in stimulus/response uncertainty as the two younger groups. Finally, among the elderly hearing-impaired listeners, speech-recognition performance was correlated negatively with hearing sensitivity, but scores were correlated positively among the different talker conditions. Those listeners with the greatest hearing loss had the most difficulty understanding speech, and those having the most trouble understanding natural speech also had the greatest difficulty with synthetic speech.


1992 ◽  
Vol 35 (4) ◽  
pp. 942-949 ◽  
Author(s):  
Christopher W. Turner ◽  
David A. Fabry ◽  
Stephanie Barrett ◽  
Amy R. Horwitz

This study examined the possibility that hearing-impaired listeners, in addition to displaying poorer-than-normal recognition of speech presented in background noise, require a larger signal-to-noise ratio for the detection of the speech sounds. Psychometric functions for the detection and recognition of stop consonants were obtained from both normal-hearing and hearing-impaired listeners. When the speech levels were expressed in terms of their short-term spectra, detection of consonants for both subject groups occurred at the same signal-to-noise ratio. In contrast, the hearing-impaired listeners displayed poorer recognition performance than the normal-hearing listeners. These results imply that the higher signal-to-noise ratios required for a given level of recognition by some subjects with hearing loss are not due even in part to a deficit in detecting the signals in the masking noise, but rather reflect a deficit in recognition alone.


1986 ◽  
Vol 29 (4) ◽  
pp. 447-462 ◽  
Author(s):  
Larry E. Humes ◽  
Donald D. Dirks ◽  
Theodore S. Bell ◽  
Christopher Ahlstrom ◽  
Gail E. Kincaid

The present article is divided into four major sections dealing with the application of acoustical indices to the prediction of speech recognition performance. In the first section, two acoustical indices, the Articulation Index (AI) and the Speech Transmission Index (STI), are described. In the next section, the effectiveness of the AI and the STI in describing the performance of normal-hearing and hearing-impaired subjects listening to spectrally distorted (filtered) and temporally distorted (reverberant) speech is examined retrospectively. In the third section, the results of a prospective investigation that examined the recognition of nonsense syllables under conditions of babble competition, filtering and reverberation are described. Finally, in the fourth section, the ability of the acoustical indices to describe the performance of 10 hearing-impaired listeners, 5 listening in quiet and 5 in babble, is examined. It is concluded that both the AI and the STI have significant shortcomings. A hybrid index, designated mSTI, which takes the best features from each procedure, is described and demonstrated to be the best alternative presently available.
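The Articulation Index described above is, at its core, an importance-weighted sum of band audibilities. The following sketch illustrates that calculation in its simplest form; the five bands, their importance weights, and the 30-dB dynamic-range rule are illustrative assumptions in the style of ANSI S3.5, not the specific AI, STI, or mSTI procedures evaluated in the article.

```python
# Minimal sketch of an Articulation Index (AI) style calculation,
# assuming five frequency bands with illustrative importance weights.
def articulation_index(band_snrs_db, importance):
    """AI = sum of importance-weighted band audibilities.

    Each band's audibility is its speech-to-noise ratio clipped to a
    30-dB dynamic range and scaled to 0..1 (an ANSI S3.5-style rule).
    """
    assert abs(sum(importance) - 1.0) < 1e-6  # weights must sum to 1
    ai = 0.0
    for snr, w in zip(band_snrs_db, importance):
        audibility = min(max(snr, 0.0), 30.0) / 30.0
        ai += w * audibility
    return ai

# Example: mid-frequency bands fully audible, highest bands masked.
weights = [0.10, 0.20, 0.30, 0.25, 0.15]
print(round(articulation_index([30, 15, 30, 5, -10], weights), 3))  # → 0.542
```

Filtering removes bands (driving their audibility toward zero), which is why the AI captures spectral distortion well; its weakness with reverberation, noted above, motivates the STI's temporal-modulation approach.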


1990 ◽  
Vol 33 (4) ◽  
pp. 726-735 ◽  
Author(s):  
Larry E. Humes ◽  
Lisa Roberts

The role that sensorineural hearing loss plays in the speech-recognition difficulties of the hearing-impaired elderly is examined. One approach to this issue was to make between-group comparisons of performance for three groups of subjects: (a) young normal-hearing adults; (b) elderly hearing-impaired adults; and (c) young normal-hearing adults with simulated sensorineural hearing loss equivalent to that of the elderly subjects produced by a spectrally shaped masking noise. Another approach to this issue employed correlational analyses to examine the relation between audibility and speech recognition within the group of elderly hearing-impaired subjects. An additional approach was pursued in which an acoustical index incorporating adjustments for threshold elevation was used to examine the role audibility played in the speech-recognition performance of the hearing-impaired elderly. A wide range of listening conditions was sampled in this experiment. The conclusion was that the primary determiner of speech-recognition performance in the elderly hearing-impaired subjects was their threshold elevation.


2006 ◽  
Vol 27 (3) ◽  
pp. 263-278 ◽  
Author(s):  
Matthew H. Burk ◽  
Larry E. Humes ◽  
Nathan E. Amos ◽  
Lauren E. Strauser

Author(s):  
Amin Ebrahimi ◽  
Mohammad Ebrahim Mahdavi ◽  
Hamid Jalilvand

Background and Aim: Digits are suitable speech materials for evaluating recognition of speech in noise in clients with a wide range of language abilities. The Farsi Auditory Recognition of Digit-in-Noise (FARDIN) test was developed and validated in learning-disabled children showing a dichotic listening deficit. This study was conducted to further validate FARDIN and to examine the effect of noise type on recognition performance in individuals with sensorineural hearing impairment. Methods: Persian monosyllabic digits 1−10 were extracted from the audio file of the FARDIN test. Ten lists were compiled using a random order of digit triplets. The first five lists were mixed with multi-talker babble noise (MTBN) and the second five with speech-spectrum noise (SSN). Signal-to-noise ratio (SNR) varied from +5 to −15 dB in 5-dB steps. Twenty normal-hearing and 19 hearing-impaired individuals participated. Results: Both types of noise differentiated listeners with hearing loss from those with normal hearing. The hearing-impaired group showed weaker digit-recognition performance in both MTBN and SSN, requiring a 4−5.6 dB higher SNR (50%) than the normal-hearing group. MTBN was more challenging than SSN for normal-hearing listeners. Conclusion: FARDIN is a valid test for estimating SNR (50%) in clients with hearing loss. SSN appears to be the more appropriate background noise for testing auditory recognition of digits in noise.
Keywords: Auditory recognition; hearing loss; speech perception in noise; digit recognition in noise
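The SNR (50%) reported above is the signal-to-noise ratio at which a listener recognizes 50% of the digits. One simple way to estimate it from percent-correct scores measured at the test's fixed SNR steps is linear interpolation across the 50% crossing; the sketch below uses this approach with illustrative data, not FARDIN's actual scores or scoring procedure.

```python
# Hypothetical sketch: estimating SNR-50 (the SNR giving 50% correct
# digit recognition) by linear interpolation over the SNRs used in the
# test (+5 to -15 dB in 5-dB steps). All data values are illustrative.
def snr50(snrs_db, pct_correct):
    """Interpolate the SNR at which performance crosses 50% correct."""
    pairs = sorted(zip(snrs_db, pct_correct))  # ascending SNR
    for (x0, y0), (x1, y1) in zip(pairs, pairs[1:]):
        if y0 <= 50.0 <= y1:
            return x0 + (50.0 - y0) * (x1 - x0) / (y1 - y0)
    raise ValueError("performance never crosses 50% in the measured range")

normal   = snr50([-15, -10, -5, 0, 5], [10, 40, 80, 95, 100])  # → -8.75 dB
impaired = snr50([-15, -10, -5, 0, 5], [ 2, 10, 30, 60, 90])   # → -1.67 dB
print(round(impaired - normal, 1))  # SNR-50 shift in dB → 7.1
```

A fitted psychometric function (e.g., a logistic) would give a smoother estimate than interpolation, but the crossing-point logic is the same: the hearing-impaired group's curve is shifted toward higher SNRs.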


2019 ◽  
Vol 62 (4) ◽  
pp. 1051-1067 ◽  
Author(s):  
Jonathan H. Venezia ◽  
Allison-Graham Martin ◽  
Gregory Hickok ◽  
Virginia M. Richards

Purpose Age-related sensorineural hearing loss can dramatically affect speech recognition performance due to reduced audibility and suprathreshold distortion of spectrotemporal information. Normal aging produces changes within the central auditory system that impose further distortions. The goal of this study was to characterize the effects of aging and hearing loss on perceptual representations of speech. Method We asked whether speech intelligibility is supported by different patterns of spectrotemporal modulations (STMs) in older listeners compared to young normal-hearing listeners. We recruited 3 groups of participants: 20 older hearing-impaired (OHI) listeners, 19 age-matched normal-hearing listeners, and 10 young normal-hearing (YNH) listeners. Listeners performed a speech recognition task in which randomly selected regions of the speech STM spectrum were revealed from trial to trial. The overall amount of STM information was varied using an up–down staircase to hold performance at 50% correct. Ordinal regression was used to estimate weights showing which regions of the STM spectrum were associated with good performance (a “classification image” or CImg). Results The results indicated that (a) large-scale CImg patterns did not differ between the 3 groups; (b) weights in a small region of the CImg decreased systematically as hearing loss increased; (c) CImgs were also nonsystematically distorted in OHI listeners, and the magnitude of this distortion predicted speech recognition performance even after accounting for audibility; and (d) YNH listeners performed better overall than the older groups. Conclusion We conclude that OHI/older normal-hearing listeners rely on the same speech STMs as YNH listeners but encode this information less efficiently. Supplemental Material https://doi.org/10.23641/asha.7859981
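The up-down staircase mentioned in the Method works by reducing the available information after each correct response and increasing it after each incorrect one, so the track settles at the level yielding 50% correct. The sketch below simulates this 1-up/1-down rule with a logistic listener model; the simulated listener, step size, and parameter values are assumptions for illustration, not the study's procedure for revealing STM regions.

```python
import math
import random

# Hypothetical sketch of a 1-up/1-down adaptive staircase: the stimulus
# "level" (here standing in for the amount of revealed STM information)
# drops after a correct trial and rises after an incorrect one, so the
# track converges on the level producing 50% correct responses.
def simulate_staircase(threshold, slope=0.5, start=10.0, step=1.0,
                       trials=400, seed=1):
    rng = random.Random(seed)
    level, track = start, []
    for _ in range(trials):
        # Simulated listener: logistic psychometric function centered
        # at `threshold` (an assumption, not the study's model).
        p_correct = 1.0 / (1.0 + math.exp(-slope * (level - threshold)))
        correct = rng.random() < p_correct
        level += -step if correct else step   # 1-up/1-down rule
        track.append(level)
    # Averaging the second half of the track estimates the 50% level.
    return sum(track[trials // 2:]) / (trials - trials // 2)

print(round(simulate_staircase(threshold=5.0), 1))  # ≈ 5.0
```

Because equal up and down steps target the 50% point, holding performance there lets the classification-image analysis attribute trial-to-trial variation to which STM regions happened to be revealed, rather than to overall difficulty.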


2020 ◽  
Vol 63 (4) ◽  
pp. 1299-1311 ◽  
Author(s):  
Timothy Beechey ◽  
Jörg M. Buchholz ◽  
Gitte Keidser

Objectives This study investigates the hypothesis that hearing aid amplification reduces effort within conversation for both hearing aid wearers and their communication partners. Levels of effort, in the form of speech production modifications, required to maintain successful spoken communication in a range of acoustic environments are compared to earlier reported results measured in unaided conversation conditions. Design Fifteen young adult normal-hearing participants and 15 older adult hearing-impaired participants were tested in pairs. Each pair consisted of one young normal-hearing participant and one older hearing-impaired participant. Hearing-impaired participants received directional hearing aid amplification, according to their audiogram, via a master hearing aid with gain provided according to the NAL-NL2 fitting formula. Pairs of participants were required to take part in naturalistic conversations through the use of a referential communication task. Each pair took part in five conversations, each of 5-min duration. During each conversation, participants were exposed to one of five different realistic acoustic environments presented through highly open headphones. The ordering of acoustic environments across experimental blocks was pseudorandomized. Resulting recordings of conversational speech were analyzed to determine the magnitude of speech modifications, in terms of vocal level and spectrum, produced by normal-hearing talkers as a function of both acoustic environment and the degree of high-frequency average hearing impairment of their conversation partner. Results The magnitude of spectral modifications of speech produced by normal-hearing talkers during conversations with aided hearing-impaired interlocutors was smaller than the speech modifications observed during conversations between the same pairs of participants in the absence of hearing aid amplification. 
Conclusions The provision of hearing aid amplification reduces the effort required to maintain communication in adverse conditions. This reduction in effort provides benefit to hearing-impaired individuals and also to the conversation partners of hearing-impaired individuals. By considering the impact of amplification on both sides of dyadic conversations, this approach contributes to an increased understanding of the likely impact of hearing impairment on everyday communication.

