Phoneme Feature Perception in Noise by Normal-Hearing and Hearing-Impaired Subjects

1985 ◽  
Vol 28 (1) ◽  
pp. 87-95 ◽  
Author(s):  
Sandra Gordon-Salant

The purpose of this investigation was to determine whether normal-hearing and hearing-impaired listeners perceive phoneme features differently in noise and to determine whether phoneme perception changes as a function of signal-to-noise ratio (S/N). Consonant-vowel recognition by normal-hearing and hearing-impaired listeners was assessed in quiet and in three noise conditions. Analysis of total percent correct recognition scores revealed significant effects of hearing status, S/N, and vowel context. Patterns of phoneme errors were analyzed by INDSCAL. Derived consonant features that accounted for phoneme errors by both subject groups were similar to ones reported by other investigators. However, weightings associated with the individual features varied with changes in noise condition. Although hearing-impaired listeners exhibited poorer overall nonsense syllable recognition scores in noise than normal-hearing listeners, no specific set of features emerged from the multidimensional scaling procedures that could uniquely account for this performance deficit.
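The signal-to-noise ratio that this abstract (and several below) varies across conditions is conventionally expressed in decibels from linear powers. A minimal sketch, with an illustrative function name not taken from the study:

```python
import math

def snr_db(signal_power: float, noise_power: float) -> float:
    """Signal-to-noise ratio in decibels, from linear signal and noise powers."""
    return 10.0 * math.log10(signal_power / noise_power)

# A signal at twice the noise power corresponds to roughly +3 dB S/N;
# equal powers give 0 dB.
print(round(snr_db(2.0, 1.0), 1))  # → 3.0
print(snr_db(1.0, 1.0))            # → 0.0
```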

2005 ◽  
Vol 48 (5) ◽  
pp. 1165-1186 ◽  
Author(s):  
Tracy S. Fitzgerald ◽  
Beth A. Prieve

Although many distortion-product otoacoustic emissions (DPOAEs) may be measured in the ear canal in response to 2 pure tone stimuli, the majority of clinical studies have focused exclusively on the DPOAE at the frequency 2f1-f2. This study investigated another DPOAE, 2f2-f1, in an attempt to determine the following: (a) the optimal stimulus parameters for its clinical measurement and (b) its utility in differentiating between normal-hearing and hearing-impaired ears at low-to-mid frequencies (≤2000 Hz) when measured either alone or in conjunction with the 2f1-f2 DPOAE. Two experiments were conducted. In Experiment 1, the effects of primary level, level separation, and frequency separation (f2/f1) on 2f2-f1 DPOAE level were evaluated in normal-hearing ears for low-to-mid f2 frequencies (700–2000 Hz). Moderately high-level primaries (60–70 dB SPL) presented at equal levels or with f2 slightly higher than f1 produced the highest 2f2-f1 DPOAE levels. When the f2/f1 ratio that produced the highest 2f2-f1 DPOAE levels was examined across participants, the mean optimal f2/f1 ratio across f2 frequencies and primary level separations was 1.08. In Experiment 2, the accuracy with which DPOAE level or signal-to-noise ratio identified hearing status at the f2 frequency as normal or impaired was evaluated using clinical decision analysis. The 2f2-f1 and 2f1-f2 DPOAEs were measured from both normal-hearing and hearing-impaired ears using 2 sets of stimulus parameters: (a) the traditional parameters for measuring the 2f1-f2 DPOAE (f2/f1 = 1.22; L1, L2 = 65, 55 dB SPL) and (b) the new parameters that were deemed optimal for the 2f2-f1 DPOAE in Experiment 1 (f2/f1 = 1.073, L1 and L2 = 65 dB SPL). Identification of hearing status using 2f2-f1 DPOAE level and signal-to-noise ratio was more accurate when the new stimulus parameters were used compared with the results achieved when the 2f2-f1 DPOAE was recorded using the traditional parameters. 
However, identification of hearing status was less accurate for the 2f2-f1 DPOAE measured using the new parameters than for the 2f1-f2 DPOAE measured using the traditional parameters. No statistically significant improvements in test performance were achieved when the information from the 2 DPOAEs was combined, either by summing the DPOAE levels or by using logistic regression analysis.
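The two distortion products and the stimulus ratios described above follow arithmetically from the primary frequencies f1 and f2. A small sketch of that arithmetic (the function names are illustrative, not from the study):

```python
def dpoae_frequencies(f1: float, f2: float) -> tuple[float, float]:
    """Return the 2f1-f2 and 2f2-f1 distortion-product frequencies in Hz."""
    return 2 * f1 - f2, 2 * f2 - f1

def f1_from_ratio(f2: float, ratio: float) -> float:
    """Given f2 and the f2/f1 ratio, recover f1."""
    return f2 / ratio

# Traditional parameters: f2 = 2000 Hz, f2/f1 = 1.22
f1 = f1_from_ratio(2000.0, 1.22)           # ≈ 1639.3 Hz
low, high = dpoae_frequencies(f1, 2000.0)  # 2f1-f2 falls below f1; 2f2-f1 above f2
```

Note that 2f1-f2 lies below both primaries while 2f2-f1 lies above them, which is one reason the two emissions behave differently and favor different stimulus parameters.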


2004 ◽  
Vol 116 (4) ◽  
pp. 2395-2405 ◽  
Author(s):  
Mead C. Killion ◽  
Patricia A. Niquette ◽  
Gail I. Gudmundsen ◽  
Lawrence J. Revit ◽  
Shilpi Banerjee

1992 ◽  
Vol 35 (4) ◽  
pp. 942-949 ◽  
Author(s):  
Christopher W. Turner ◽  
David A. Fabry ◽  
Stephanie Barrett ◽  
Amy R. Horwitz

This study examined the possibility that hearing-impaired listeners, in addition to displaying poorer-than-normal recognition of speech presented in background noise, require a larger signal-to-noise ratio for the detection of the speech sounds. Psychometric functions for the detection and recognition of stop consonants were obtained from both normal-hearing and hearing-impaired listeners. When the speech levels were expressed in terms of their short-term spectra, the detection of consonants occurred at the same signal-to-noise ratio for both subject groups. In contrast, the hearing-impaired listeners displayed poorer recognition performance than the normal-hearing listeners. These results imply that the higher signal-to-noise ratios required for a given level of recognition by some subjects with hearing loss are not due, even in part, to a deficit in detecting the signals in the masking noise, but rather are due exclusively to a deficit in recognition.


2016 ◽  
Vol 59 (1) ◽  
pp. 110-121 ◽  
Author(s):  
Marc Brennan ◽  
Ryan McCreery ◽  
Judy Kopun ◽  
Dawna Lewis ◽  
Joshua Alexander ◽  
...  

Purpose This study compared masking release for adults and children with normal hearing and hearing loss. For the participants with hearing loss, masking release using simulated hearing aid amplification with 2 different compression speeds (slow, fast) was compared. Method Sentence recognition in unmodulated noise was compared with recognition in modulated noise (masking release). Recognition was measured for participants with hearing loss using individualized amplification via the hearing-aid simulator. Results Adults with hearing loss showed greater masking release than the children with hearing loss. Average masking release was small (1 dB) and did not depend on hearing status. Masking release was comparable for slow and fast compression. Conclusions The use of amplification in this study contrasts with previous studies that did not use amplification. The results suggest that when differences in audibility are reduced, participants with hearing loss may be able to take advantage of dips in the noise levels, similar to participants with normal hearing. Although children required a more favorable signal-to-noise ratio than adults for both unmodulated and modulated noise, masking release was not statistically different. However, the ability to detect a difference may have been limited by the small amount of masking release observed.
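Masking release, as used above, is simply the improvement in speech reception threshold (SRT) when the masker is modulated rather than steady. A minimal sketch of the computation, with illustrative names and example thresholds that are not from the study:

```python
def masking_release(srt_unmodulated_db: float, srt_modulated_db: float) -> float:
    """Masking release in dB: how much lower (better) the speech reception
    threshold is in modulated noise than in unmodulated noise."""
    return srt_unmodulated_db - srt_modulated_db

# Hypothetical thresholds: -2 dB SNR in steady noise, -3 dB SNR in
# modulated noise, i.e. a 1-dB release comparable in size to the small
# average release reported above.
print(masking_release(-2.0, -3.0))  # → 1.0
```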


2000 ◽  
Vol 43 (4) ◽  
pp. 902-914 ◽  
Author(s):  
Patricia G. Stelmachowicz ◽  
Brenda M. Hoover ◽  
Dawna E. Lewis ◽  
Reinier W. L. Kortekaas ◽  
Andrea L. Pittman

In this study, the influence of stimulus context and audibility on sentence recognition was assessed in 60 normal-hearing children, 23 hearing-impaired children, and 20 normal-hearing adults. Performance-intensity (PI) functions were obtained for 60 semantically correct and 60 semantically anomalous sentences. For each participant, an audibility index (AI) was calculated at each presentation level, and a logistic function was fitted to rau-transformed percent-correct values to estimate the SPL and AI required to achieve 70% performance. For both types of sentences, there was a systematic age-related shift in the PI functions, suggesting that young children require a higher AI to achieve performance equivalent to that of adults. Improvement in performance with the addition of semantic context was statistically significant only for the normal-hearing 5-year-olds and adults. Data from the hearing-impaired children showed age-related trends that were similar to those of the normal-hearing children, with the majority of individual data falling within the 5th and 95th percentile of normal. The implications of these findings in terms of hearing-aid fitting strategies for young children are discussed.
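The 70%-correct point on a fitted logistic function has a closed-form inverse, which is how a threshold level can be read off a performance-intensity function once the fit is done. A minimal sketch under simplifying assumptions (a two-parameter logistic in raw proportion correct; the study fitted rau-transformed scores, and all parameter values here are hypothetical):

```python
import math

def logistic(x: float, x0: float, k: float) -> float:
    """Logistic psychometric function: proportion correct at level x,
    with midpoint x0 and slope k."""
    return 1.0 / (1.0 + math.exp(-k * (x - x0)))

def level_for_performance(p: float, x0: float, k: float) -> float:
    """Invert the logistic to find the level (e.g., SPL or AI) giving proportion p."""
    return x0 + math.log(p / (1.0 - p)) / k

# With a hypothetical midpoint of 50 dB SPL and slope 0.3 per dB,
# 70% correct falls near 52.8 dB SPL.
level_70 = level_for_performance(0.70, 50.0, 0.3)
```

An age-related shift of the PI function, in these terms, is a shift of x0 toward higher levels for younger children.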


2019 ◽  
Vol 62 (10) ◽  
pp. 3851-3859
Author(s):  
Jean C. Krause ◽  
Athina Panagos Panagiotopoulos

Purpose Talkers typically use a slow speaking rate when producing clear speech, a speaking style that has been widely shown to improve intelligibility over conversational speech in difficult communication environments. With training, however, talkers can learn to produce a form of clear speech at normal speaking rates that provides young listeners with normal hearing much of the same intelligibility benefit. The purpose of this study was to determine if older listeners with normal hearing can also obtain an intelligibility benefit from clear speech at normal rates. Method Eight older listeners (55–68 years of age) with normal hearing were presented with nonsense sentences from 4 talkers in a background of speech-shaped noise (signal-to-noise ratio = 0 dB). Intelligibility (percent correct key words) was evaluated for conversational and clear speech produced at 2 speaking rates (normal and slow), for a total of 4 conditions: conv/normal, conv/slow, clear/normal, and clear/slow. Results As expected, the clear/slow speaking condition provided a large and robust intelligibility advantage (23 points) over conv/normal speech. The conv/slow condition provided almost as much benefit on average (21 points) but was highly variable across talkers. Notably, the clear/normal speaking condition provided the same size intelligibility advantage (14 points), previously reported for young listeners with normal hearing ( Krause & Braida, 2002 ), thus extending the benefit of clear speech at normal speaking rates to older normal-hearing listeners. Conclusions Applications based on clear/normal speech (e.g., signal processing approaches for hearing aids) have the potential to provide comparable intelligibility improvements to older and younger listeners alike.


1992 ◽  
Vol 35 (2) ◽  
pp. 436-442 ◽  
Author(s):  
John P. Madden ◽  
Lawrence L. Feth

This study compares the temporal resolution of frequency-modulated sinusoids by normal-hearing and hearing-impaired subjects in a discrimination task. One signal increased linearly by 200 Hz in 50 msec. The other was identical except that its trajectory followed a series of discrete steps. Center frequencies were 500, 1000, 2000, and 4000 Hz. As the number of steps was increased, the duration of the individual steps decreased, and the subjects’ discrimination performance monotonically decreased to chance. It was hypothesized that the listeners could not temporally resolve the trajectory of the step signals at short step durations. At equal sensation levels, and at equal sound pressure levels, temporal resolution was significantly reduced for the impaired subjects. The difference between groups was smaller in the equal sound pressure level condition. Performance was much poorer at 4000 Hz than at the other test frequencies in all conditions because of poorer frequency discrimination at that frequency.
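The stepped stimuli described above can be characterized with simple arithmetic: a 200-Hz rise over 50 ms divided into n equal-duration, equal-size frequency steps, so that each added step shortens the individual step duration. A sketch with illustrative names (the exact step placement used in the study is an assumption here):

```python
def step_trajectory(f_start: float, extent_hz: float, duration_ms: float,
                    n_steps: int) -> tuple[list[float], float]:
    """Frequencies (at step midpoints) and per-step duration for a stepwise
    approximation of a linear glide covering extent_hz in duration_ms."""
    step_ms = duration_ms / n_steps
    freqs = [f_start + extent_hz * (i + 0.5) / n_steps for i in range(n_steps)]
    return freqs, step_ms

# A glide from 500 Hz rising 200 Hz in 50 ms, approximated with 5 steps:
freqs, step_ms = step_trajectory(500.0, 200.0, 50.0, 5)
# freqs → [520.0, 560.0, 600.0, 640.0, 680.0]; each step lasts 10.0 ms
```

Doubling the number of steps halves the step duration, which is the manipulation that eventually drives discrimination from the smooth glide down to chance.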


1985 ◽  
Vol 28 (3) ◽  
pp. 381-393 ◽  
Author(s):  
Elmer Owens ◽  
Barbara Blazek

A series of VCV nonsense syllables formed with 23 consonants and the vowels //, /i/, /u/, and // was presented on videotape without sound to 5 hearing-impaired adults and 5 adults with normal hearing. The two-fold purpose was (a) to determine whether the two groups would perform the same in their identification of visemes and (b) to observe whether the identification of visemes is influenced by vowel context. There were no differences between the two groups either with respect to the overall percentage of items correct or to the visemes identified. Noticeable differences occurred in viseme identification between the /u/ context and the other 3 vowel contexts; visemes with // differed slightly from those with // and /i/; and there were no differences in viseme identification for // and /i/ contexts. Findings were in general agreement with other studies with respect to the visemes identified, provided it is acknowledged that changes can occur depending on variables such as talkers, stimuli, recording and viewing conditions, training procedures, and statistical criteria. A composite grouping consists of /p, b, m/; /f, v/; /θ, ð/; /w, r/; /tʃ, dʒ, ʃ, ʒ/; and /t, d, s, k, n, g, l/.

