The neural encoding of formant frequencies contributing to vowel identification in normal-hearing listeners

2016 · Vol 139 (1) · pp. 1-11
Author(s): Jong Ho Won, Kelly Tremblay, Christopher G. Clinard, Richard A. Wright, Elad Sagi, et al.
1997 · Vol 40 (6) · pp. 1434-1444
Author(s): Kathryn Hoberg Arehart, Catherine Arriaga King, Kelly S. McLean-Mudgett

This study compared the ability of listeners with normal hearing and listeners with moderate to moderately-severe sensorineural hearing loss to use fundamental frequency differences (ΔF0) in the identification of monotically presented simultaneous vowels. Two psychophysical procedures, double vowel identification and masked vowel identification, were used to measure identification performance as a function of ΔF0 (0 through 8 semitones) between simultaneous vowels. Performance in the double vowel identification task was measured by the percentage of trials in which listeners correctly identified both vowels in a double vowel. The masked vowel identification task yielded thresholds representing signal-to-noise ratios at which listeners could just identify target vowels in the presence of a masking vowel. In the double vowel identification task, both listeners with normal hearing and listeners with hearing loss showed significant ΔF0 benefit: Between 0 and 2 semitones, listeners with normal hearing showed an 18.5% average increase in performance; listeners with hearing loss showed a 16.5% average increase. In the masked vowel identification task, both groups showed significant ΔF0 benefit. However, the mean benefit associated with ΔF0 differences in the masked vowel task was more than twice as large in listeners with normal hearing (9.4 dB) as in listeners with hearing loss (4.4 dB), suggesting less ΔF0 benefit in listeners with hearing loss. In both tasks, overall performance of listeners with hearing loss was significantly worse than performance of listeners with normal hearing. Possible reasons for reduced ΔF0 benefit and decreased overall performance in listeners with hearing loss include reduced audibility of vowel sounds and deficits in spectro-temporal processing.
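The ΔF0 conditions above are spaced in semitones, a logarithmic unit in which 12 semitones equal one octave (a doubling of frequency). As a quick illustration of what that spacing means in hertz, here is a minimal sketch; the function names are illustrative, not taken from the study:

```python
import math

def semitones_to_ratio(semitones: float) -> float:
    """Convert a pitch interval in semitones to a frequency ratio."""
    return 2.0 ** (semitones / 12.0)

def shifted_f0(base_f0_hz: float, semitones: float) -> float:
    """F0 of a second vowel placed `semitones` above a base F0."""
    return base_f0_hz * semitones_to_ratio(semitones)

# A 2-semitone dF0 from a 100 Hz base raises F0 by about 12%,
# while 12 semitones doubles it (one octave).
two_st = shifted_f0(100.0, 2)    # ~112.2 Hz
octave = shifted_f0(100.0, 12)   # 200.0 Hz
```

So the 0-8 semitone range used in the study corresponds to F0 separations from identical up to roughly a 59% frequency difference between the two vowels.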


1997 · Vol 28 (1) · pp. 77-85
Author(s): Carole E. Johnson, Ramona L. Stein, Alicia Broadway, Tamatha S. Markwalter

The purpose of this study was to assess the consonant and vowel identification abilities of 12 children with minimal high-frequency hearing loss, 12 children with normal hearing, and 12 young adults with normal hearing using nonsense syllables recorded in a classroom with a reverberation time of 0.7 s in two conditions: (1) quiet and (2) noise (+13 dB S/N against a multi-talker babble). The young adults achieved significantly higher mean consonant and vowel identification scores than both groups of children. The children with normal hearing had significantly higher mean consonant identification scores in quiet than the children with minimal high-frequency hearing loss, but the groups' performances did not differ in noise. Further, the two groups of children did not differ in vowel identification performance. Listeners' responses to consonant stimuli were converted to confusion matrices and submitted to a sequential information analysis (SINFA; Wang & Bilger, 1973). The SINFA determined that the amount of information transmitted, both overall and for individual features, differed as a function of listener group and listening condition.
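SINFA itself is a sequential, feature-by-feature procedure, but the quantity it partitions is the information transmitted from stimulus to response, estimated from a confusion matrix in the manner of Miller and Nicely. A minimal sketch of that underlying computation (not the full sequential analysis) might look like this:

```python
import math

def transmitted_information(confusions):
    """Estimate transmitted information (bits) from a confusion matrix.

    `confusions[i][j]` is the count of trials on which stimulus i
    drew response j. Returns the mutual information between the
    stimulus and response distributions.
    """
    total = sum(sum(row) for row in confusions)
    n_resp = len(confusions[0])
    stim_p = [sum(row) / total for row in confusions]
    resp_p = [sum(row[j] for row in confusions) / total
              for j in range(n_resp)]
    bits = 0.0
    for i, row in enumerate(confusions):
        for j, count in enumerate(row):
            if count:  # 0 * log(0) contributes nothing
                p = count / total
                bits += p * math.log2(p / (stim_p[i] * resp_p[j]))
    return bits

# Perfect identification of two consonants transmits 1 bit;
# responding at chance transmits 0 bits.
perfect = transmitted_information([[10, 0], [0, 10]])
chance = transmitted_information([[5, 5], [5, 5]])
```

Feature-level scores (voicing, place, manner) are obtained the same way after collapsing the matrix over consonants sharing a feature value, which is how an analysis like the one above can show that different features suffer differently across listener groups and conditions.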


1971 · Vol 14 (4) · pp. 848-857
Author(s): Norman P. Erber

Two talkers with normal hearing and speech presented, with voice, 240 common nouns (80 monosyllables, 80 trochees, 80 spondees) to six profoundly deaf children whose task was to lipread without acoustic cues at distances from 5 to 100 ft. Under bright, shadow-free illumination, lipreading performance diminished from 75% correct at 5 ft to 11% correct at 100 ft. Scores varied with distance similarly for both talkers. The stress patterns of the stimulus words influenced their intelligibility, with scores decreasing from spondees to trochees to monosyllables. In a supplementary study, one talker presented two tests of phoneme recognition to the same six deaf children whose task was to lipread from 5, 20, or 70 ft. Identification of consonants in VCV context depended on their place of articulation (front superior to back) and on the surrounding vowel (/a–a/ superior to /i–i/ or /u–u/). Vowel-identification scores were less dependent on distance than were consonant-identification scores. In general, tense (stressed) vowels were more easily identified in /b/-V-/b/ context than were lax (unstressed) vowels.


2020 · Vol 24 · pp. 233121652092007
Author(s): Michael F. Dorman, Sarah Cook Natale, Leslie Baxter, Daniel M. Zeitler, Matthew L. Carlson, et al.

Fourteen single-sided deaf listeners fit with an MED-EL cochlear implant (CI) judged the similarity of clean signals presented to their CI and modified signals presented to their normal-hearing ear. The signals to the normal-hearing ear were created by (a) filtering, (b) spectral smearing, (c) changing overall fundamental frequency (F0), (d) F0 contour flattening, (e) changing formant frequencies, (f) altering resonances and ring times to create a metallic sound quality, (g) using a noise vocoder, or (h) using a sine vocoder. The operations could be used singly or in any combination. On a scale of 1 to 10 where 10 was a complete match to the sound of the CI, the mean match score was 8.8. Over half of the matches were 9.0 or higher. The most common alterations to a clean signal were band-pass or low-pass filtering, spectral peak smearing, and F0 contour flattening. On average, 3.4 operations were used to create a match. Upshifts in formant frequencies were implemented most often for electrode insertion angles less than approximately 500°. A relatively small set of operations can produce signals that approximate the sound of the MED-EL CI. There are large individual differences in the combination of operations needed. The sound files in Supplemental Material approximate the sound of the MED-EL CI for patients fit with 28-mm electrode arrays.


1994 · Vol 37 (4) · pp. 952-959
Author(s): Robin S. Waldstein, Shari R. Baum

Two experiments investigated the perception of coarticulatory cues in the speech of children with profound hearing loss and children with normal hearing. To examine anticipatory coarticulation, five repetitions of the syllables [ʃi ʃu ti tu ki ku] produced by nine children with hearing loss and nine children with normal hearing were edited to include only the aperiodic consonantal portion. To explore perseveratory coarticulation, comparable segments were excised from the syllables [iʃ uʃ it ut ik uk]. The stimuli had been analyzed previously in two acoustic studies of coarticulation (Baum & Waldstein, 1991; Waldstein & Baum, 1991). Ten listeners were presented with the aperiodic segment and were asked to identify the missing vowel. Overall, listeners' vowel identification was better for the productions by children with normal hearing than for those by children with hearing loss. In anticipatory contexts, listeners were able to identify the absent vowel with better-than-chance accuracy for all productions by both groups except the [i] tokens following [ʃ] produced by children with hearing loss. In perseveratory contexts, identification accuracy was significantly above chance for all except the [i] tokens preceding [t] produced by children with normal hearing, but only for [u] tokens produced by children with hearing loss. Identification accuracy was better in anticipatory than in perseveratory contexts for both speaker groups' productions. The patterning of vowel identification, however, differed for the two speaker groups in anticipatory but not perseveratory contexts.

