Processing of Cues for Stop Consonant Voicing by Young Hearing-Impaired Listeners

1984 ◽  
Vol 27 (1) ◽  
pp. 112-118 ◽  
Author(s):  
Deborah Johnson ◽  
Patricia Whaley ◽  
M. F. Dorman

To assess whether young hearing-impaired listeners are as sensitive as normal-hearing children to the cues for stop consonant voicing, we presented stimuli along VOT continua to young normal-hearing listeners and to listeners with mild, moderate, severe, and profound hearing impairments. The response measures were the location of the phonetic boundaries, the change in boundaries with changes in place of articulation, and response variability. The listeners with normal hearing sensitivity and those with mild and moderate hearing impairments did not differ in performance on any response measure. The listeners with severe impairments did not show the expected change in VOT boundary with changes in place of articulation. Moreover, stimulus uncertainty (i.e., the number of possible choices in the response set) affected their response variability. One listener with profound impairment was able to process the cues for voicing in a normal fashion under conditions of minimum stimulus uncertainty. We infer from these results that the cochlear damage which underlies mild and moderate hearing impairment does not significantly alter the auditory representation of VOT. However, the cochlear damage underlying severe impairment, possibly interacting with high signal presentation levels, does alter the auditory representation of VOT.
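A phonetic boundary of the kind measured here is conventionally taken as the point on the VOT continuum where labeling crosses 50% between the two voicing categories. The sketch below illustrates that computation with linear interpolation; the data values are invented for illustration and are not from the study.

```python
# Hedged sketch (illustrative data, not the study's): locate the
# phonetic boundary on a VOT continuum as the point where labeling
# first crosses 50% "voiceless" responses, interpolating linearly
# between adjacent continuum steps.
vot_ms = [0, 10, 20, 30, 40, 50, 60]          # voice onset time steps (ms)
pct_voiceless = [2, 5, 15, 48, 85, 96, 99]    # % voiceless (/t/-type) labels

def boundary_50(x, y):
    """Return the x where y first crosses 50, by linear interpolation."""
    for (x0, y0), (x1, y1) in zip(zip(x, y), zip(x[1:], y[1:])):
        if y0 <= 50 <= y1:
            return x0 + (50 - y0) * (x1 - x0) / (y1 - y0)
    return None

print(round(boundary_50(vot_ms, pct_voiceless), 2))  # boundary in ms
```

Shifts in this crossover point with place of articulation (e.g., labial vs. velar) are the boundary changes the study tracked.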

1990 ◽  
Vol 33 (1) ◽  
pp. 163-173 ◽  
Author(s):  
Brian E. Walden ◽  
Allen A. Montgomery ◽  
Robert A. Prosek ◽  
David B. Hawkins

Intersensory biasing occurs when cues in one sensory modality influence the perception of discrepant cues in another modality. Visual biasing of auditory stop consonant perception was examined in two related experiments in an attempt to clarify the role of hearing impairment on susceptibility to visual biasing of auditory speech perception. Fourteen computer-generated acoustic approximations of consonant-vowel syllables forming a /ba-da-ga/ continuum were presented for labeling as one of the three exemplars, via audition alone and in synchrony with natural visual articulations of /ba/ and of /ga/. Labeling functions were generated for each test condition showing the percentage of /ba/, /da/, and /ga/ responses to each of the 14 synthetic syllables. The subjects of the first experiment were 15 normal-hearing and 15 hearing-impaired observers. The hearing-impaired subjects demonstrated a greater susceptibility to biasing from visual cues than did the normal-hearing subjects. In the second experiment, the auditory stimuli were presented in a low-level background noise to 15 normal-hearing observers. A comparison of their labeling responses with those from the first experiment suggested that hearing-impaired persons may develop a propensity to rely on visual cues as a result of long-term hearing impairment. The results are discussed in terms of theories of intersensory bias.


2010 ◽  
Vol 21 (08) ◽  
pp. 493-511
Author(s):  
Amanda J. Ortmann ◽  
Catherine V. Palmer ◽  
Sheila R. Pratt

Background: A possible voicing cue used to differentiate voiced and voiceless cognate pairs is envelope onset asynchrony (EOA). EOA is the time between the onsets of two frequency bands of energy (in this study one band was high-pass filtered at 3000 Hz, the other low-pass filtered at 350 Hz). This study assessed the perceptual impact of manipulating EOA on voicing perception of initial stop consonants, and whether normal-hearing and hearing-impaired listeners were sensitive to changes in EOA as a cue for voicing. Purpose: The purpose of this study was to examine the effect of spectrally asynchronous auditory delay on the perception of voicing associated with initial stop consonants by normal-hearing and hearing-impaired listeners. Research Design: Prospective experimental study comparing the perceptual differences of manipulating the EOA cues for two groups of listeners. Study Sample: Thirty adults between the ages of 21 and 60 yr completed the study: 17 listeners with normal hearing and 13 listeners with mild-moderate sensorineural hearing loss. Data Collection and Analysis: The participants listened to voiced and voiceless stop consonants within a consonant-vowel syllable structure. The EOA of each syllable was varied along a continuum, and identification and discrimination tasks were used to determine if the EOA manipulation resulted in categorical shifts in voicing perception. In the identification task the participants identified the consonants as belonging to one of two categories (voiced or voiceless cognate). They also completed a same-different discrimination task with the same set of stimuli. Categorical perception was confirmed with a d-prime sensitivity measure by examining how accurately the results from the identification task predicted the discrimination results. The influence of EOA manipulations on the perception of voicing was determined from shifts in the identification functions and discrimination peaks along the EOA continuum. 
The two participant groups were compared in order to determine the impact of EOA on voicing perception as a function of syllable and hearing status. Results: Both groups of listeners demonstrated a categorical shift in voicing perception with manipulation of EOA for some of the syllables used in this study. That is, as the temporal onset asynchrony between low- and high-frequency bands of speech was manipulated, the listeners' perception of consonant voicing changed between voiced and voiceless categories. No significant differences were found between listeners with normal hearing and listeners with hearing loss as a result of the EOA manipulation. Conclusions: The results of this study suggested that both normal-hearing and hearing-impaired listeners likely use spectrally asynchronous delays found in natural speech as a cue for voicing distinctions. While delays in modern hearing aids are less than those used in this study, possible implications are that additional asynchronous delays from digital signal processing or open-fitting amplification schemes might cause listeners with hearing loss to misperceive voicing cues.
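The d-prime sensitivity measure mentioned above compares observed discrimination against what the identification (labeling) functions predict. A minimal sketch of the underlying d' computation, using the standard z-transform of hit and false-alarm rates (the numbers are hypothetical, not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    Rates are clamped away from 0 and 1 so the inverse normal
    (z) transform stays finite.
    """
    eps = 1e-3

    def clamp(p: float) -> float:
        return min(max(p, eps), 1 - eps)

    z = NormalDist().inv_cdf
    return z(clamp(hit_rate)) - z(clamp(fa_rate))

# A listener who responds "different" on 85% of across-boundary pairs
# (hits) and 20% of within-category pairs (false alarms):
print(round(d_prime(0.85, 0.20), 2))
```

Categorical perception is supported when d' peaks for pairs straddling the identification boundary and falls near zero within a category.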


2001 ◽  
Vol 44 (5) ◽  
pp. 964-974 ◽  
Author(s):  
Mark Hedrick ◽  
Mary Sue Younger

The current study explored the changes in weighting of relative amplitude and formant transition cues that may be caused by a K-amp circuit. Twelve listeners with normal hearing and three listeners with sensorineural hearing loss labeled the stop consonant place of articulation of synthetic consonant-vowel stimuli. Within the stimuli, two acoustic cues were varied: the frequency of the onset of the second and third formant (F2/F3) transitions and the relative amplitude between the consonant burst and the following vowel in the fourth and fifth formant (F4/F5) frequency region. The variation in the two cues ranged from values appropriate for a voiceless labial stop consonant to a voiceless alveolar stop consonant. The listeners labeled both the unaided stimuli and the stimuli recorded through a hearing aid with a K-amp circuit. An analysis of variance (ANOVA) model was used to calculate the perceptual weight given to each cue. Data from listeners with normal hearing show a change in relative weighting of cues between aided and unaided stimuli. Pilot data from the listeners with hearing loss show a more varied pattern, with more weight placed on relative amplitude. These results suggest that calculation of perceptual weights using an ANOVA model may be worthwhile in future studies examining the relationship between acoustic information presented by a hearing aid and the subsequent perception by the listener with hearing loss.
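One common simplification of such a perceptual-weight analysis (not necessarily the authors' exact ANOVA model) estimates each cue's weight from its marginal effect on labeling proportions, averaged over the levels of the other cue, then normalizes the effects to sum to one. A sketch with hypothetical labeling data:

```python
# Hedged sketch with invented data: proportion of "alveolar" labels in a
# 2 x 2 cue design. Rows: relative-amplitude level (low/high in the
# F4/F5 region); columns: F2/F3 transition onset (labial-/alveolar-like).
p_alveolar = [
    [0.10, 0.60],   # low relative amplitude
    [0.45, 0.95],   # high relative amplitude
]

# Marginal (main) effect of each cue, averaged over the other cue.
amp_effect = ((p_alveolar[1][0] + p_alveolar[1][1])
              - (p_alveolar[0][0] + p_alveolar[0][1])) / 2
trans_effect = ((p_alveolar[0][1] + p_alveolar[1][1])
                - (p_alveolar[0][0] + p_alveolar[1][0])) / 2

# Normalize so the two weights sum to 1.
total = amp_effect + trans_effect
weights = {"relative_amplitude": amp_effect / total,
           "formant_transition": trans_effect / total}
print(weights)
```

A listener who relies more on relative amplitude (as the pilot hearing-loss data suggest) would show a larger first weight; aided versus unaided stimuli can be compared by computing the weights separately for each condition.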


1982 ◽  
Vol 25 (4) ◽  
pp. 600-607 ◽  
Author(s):  
Andre-Pierre Benguerel ◽  
Margaret Kathleen Pichora-Fuller

Normal-hearing and hearing-impaired subjects with good lipreading skills lipread videotaped material under visual-only conditions. V₁CV₂ utterances were used, where V could be /i/, /æ/, or /u/ and C could be /p/, /t/, /k/, /tʃ/, /f/, /θ/, /s/, /ʃ/, or /w/. Coarticulatory effects were present in these stimuli. The influence of phonetic context on lipreading scores for each V and C was analyzed in an effort to explain some of the variability in the visual perception of phonemes suggested by the existing literature. Transmission of information for four phonetic features was also analyzed. Lipreading performance was nearly perfect for /p/, /f/, /w/, /θ/, and /u/. Lipreading performance on /t/, /k/, /tʃ/, /ʃ/, /s/, /i/, and /æ/ depended on context. The features labial, rounded, and alveolar or palatal place of articulation were found to transmit more information to lipreaders than did the feature continuant. Variability in articulatory parameters resulting from coarticulatory effects appears to increase overall lipreading difficulty.


1997 ◽  
Vol 40 (6) ◽  
pp. 1445-1457 ◽  
Author(s):  
Mark S. Hedrick ◽  
Arlene Earley Carney

Previous studies have shown that manipulation of a particular frequency region of the consonantal portion of a syllable relative to the amplitude of the same frequency region in an adjacent vowel influences the perception of place of articulation. This manipulation has been called the relative amplitude cue. Earlier studies have examined the effect of relative amplitude and formant transition manipulations upon labeling place of articulation for fricatives and stop consonants in listeners with normal hearing. The current study sought to determine (a) whether the relative amplitude cue is used by adult listeners wearing a cochlear implant to label place of articulation, and (b) whether adult listeners wearing a cochlear implant integrate the relative amplitude and formant transition information differently than listeners with normal hearing. Sixteen listeners participated in the study: 12 with normal hearing and 4 postlingually deafened adults wearing the Nucleus 22-electrode Mini Speech Processor implant with the multipeak processing strategy. The stimuli used were synthetic consonant-vowel (CV) syllables in which relative amplitude and formant transitions were manipulated. The two speech contrasts examined were the voiceless fricative contrast /s/-/ʃ/ and the voiceless stop consonant contrast /p/-/t/. For each contrast, listeners were asked to label the consonant sound in the syllable from the two response alternatives. Results showed that (a) listeners wearing this implant could use relative amplitude to consistently label place of articulation, and (b) listeners with normal hearing integrated the relative amplitude and formant transition information more than listeners wearing a cochlear implant, who weighted the relative amplitude information as much as 13 times that of the transition information.


1989 ◽  
Vol 32 (1) ◽  
pp. 133-142 ◽  
Author(s):  
Marleen T. Ochs ◽  
Larry E. Humes ◽  
Ralph N. Ohde ◽  
D. Wesley Grantham

Identification of place of articulation in the synthesized syllables /bi/, /di/, and /gi/ was examined in three groups of listeners: (a) normal hearers, (b) subjects with high-frequency sensorineural hearing loss, and (c) normally hearing subjects listening in noise. Stimuli with an appropriate second formant (F2) transition (moving-F2 stimuli) were compared with stimuli in which F2 was constant (straight-F2 stimuli) to examine the importance of the F2 transition in stop-consonant perception. For straight-F2 stimuli, burst spectrum and F2 frequency were appropriate for the syllable involved. Syllable duration also was a variable, with formant durations of 10, 19, 28, and 44 ms employed. All subjects' identification performance improved as stimulus duration increased. The groups were equivalent in terms of their identification of /di/ and /gi/ syllables, whereas the hearing-impaired and noise-masked normal listeners showed impaired performance for /bi/, particularly for the straight-F2 version. No difference in performance among groups was seen for /di/ and /gi/ stimuli for moving-F2 and straight-F2 versions. Second-formant frequency discrimination measures suggested that subjects' discrimination abilities were not acute enough to take advantage of the formant transition in the /di/ and /gi/ stimuli.

