Speech Perception in Adult Subjects With Familial Dyslexia

1992, Vol 35 (1), pp. 192-200
Author(s): Michele L. Steffens, Rebecca E. Eilers, Karen Gross-Glenn, Bonnie Jallad

Speech perception was investigated in a carefully selected group of adult subjects with familial dyslexia. Perception of three synthetic speech continua was studied: /a/-//, in which steady-state spectral cues distinguished the vowel stimuli; /ba/-/da/, in which rapidly changing spectral cues were varied; and /sta/-/sa/, in which a temporal cue, silence duration, was systematically varied. These three continua, which differed in the nature of the acoustic cues distinguishing the members of each pair, were used to assess subjects' abilities to use steady-state, dynamic, and temporal cues. Dyslexic and normal readers participated in one identification and two discrimination tasks for each continuum. Results suggest that dyslexic readers required a greater silence duration than normal readers to shift their perception from /sa/ to /sta/. In addition, although the dyslexic subjects were able to label and discriminate the synthetic speech continua, they did not necessarily use the acoustic cues in the same manner as normal readers, and their overall performance was generally less accurate.
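The boundary-shift result can be made concrete with a small sketch (not the authors' analysis): fitting a logistic psychometric function to identification data to estimate the silence duration at which perception flips from /sa/ to /sta/, i.e., the 50% point. All data values below are invented for illustration.

```python
# Illustrative sketch only: estimating a /sa/-/sta/ category boundary by
# fitting a logistic curve to hypothetical percent-/sta/ responses.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, boundary, slope):
    """Probability of a /sta/ response as a function of silence duration (ms)."""
    return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

# Hypothetical identification data: silence durations (ms) along the
# continuum and the proportion of /sta/ responses at each step.
durations = np.array([0, 20, 40, 60, 80, 100, 120], dtype=float)
p_sta = np.array([0.02, 0.05, 0.20, 0.55, 0.85, 0.95, 0.98])

# The fitted "boundary" is the silence duration at the 50% point; a group
# difference in this parameter is one way to express "dyslexic readers
# required a greater silence duration" than normal readers.
(boundary, slope), _ = curve_fit(logistic, durations, p_sta, p0=[60.0, 0.1])
print(f"estimated /sa/-/sta/ boundary: {boundary:.1f} ms")
```

Comparing the fitted `boundary` parameter between groups (rather than raw response counts) is a standard way to quantify a shift in the category boundary.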

1980, Vol 23 (2), pp. 419-428
Author(s): Rebecca E. Eilers, D. Kimbrough Oller

The discrimination of minimally paired speech sounds by seven retarded children with a mean age of 3 years, 2 months and a mean IQ of 38.4 was compared with the discrimination performance of eight normally developing 7-month-old infants. Children and infants were tested using the Visually Reinforced Infant Speech Discrimination (VRISD) paradigm, in which they were taught to respond with a head turn to a change in a repeating background auditory stimulus. Responses were reinforced by activation of an animated toy. All children proved to be conditionable, and both groups evidenced discrimination of the speech contrasts tested. The data suggest that the retarded children have more difficulty processing a contrast cued by rapid spectral changes (often associated with consonant discrimination) than a contrast cued by steady-state spectral information (often associated with the perception of slowly articulated vowels). The normally developing infants did not find rapid spectral cues more difficult than steady-state cues. These results parallel those of Tallal (1976), who found that dynamic cues were specifically difficult for dysphasic children (with normal nonverbal intelligence), but not for linguistically normal elementary school children.


2015, Vol 24 (4), pp. 462-468
Author(s): Jessica J. Messersmith, Lindsey E. Jorgensen, Jessica A. Hagg

Purpose The purpose of this study was to determine whether an alternate fitting strategy, specifically adjustment to gains in a hearing aid (HA), would improve performance in patients who experienced poorer performance in the bimodal condition when the HA was fit to traditional targets. Method This study was a retrospective chart review from a local clinic population seen during a 6-month period. Participants included 6 users of bimodal stimulation. Two performed more poorly in the cochlear implant (CI) + HA condition than in the CI-only condition. One individual performed better in the bimodal condition, but overall performance was low. Three age range–matched users whose performance increased when the HA was used in conjunction with a CI were also included. The HA gain was reduced above 2000 Hz. Speech perception scores were obtained pre- and postmodification to the HA fitting. Results All listeners whose HA was programmed using the modified approach demonstrated improved speech perception scores with the modified HA fit in the bimodal condition when compared with the traditional HA fit in the bimodal condition. Conclusion Modifications to gains above 2000 Hz in the HA may improve performance for bimodal listeners who perform more poorly in the bimodal condition when the HA is fit to traditional targets.
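As a hedged illustration of the modification described here, the sketch below represents a prescribed HA gain table and reduces the gains at frequencies above 2000 Hz. The specific gain values and the 10 dB reduction are invented for the example; the study reports neither.

```python
# Hypothetical prescribed gain table (frequency in Hz -> insertion gain in dB).
# Values are illustrative only, not a real prescription.
prescribed_gains = {250: 10, 500: 15, 1000: 20, 2000: 25, 4000: 30, 8000: 25}

def reduce_high_frequency_gain(gains, cutoff_hz=2000, reduction_db=10):
    """Return a new gain table with gains above cutoff_hz reduced by reduction_db."""
    return {f: (g - reduction_db if f > cutoff_hz else g) for f, g in gains.items()}

# Gains at and below 2000 Hz are left at the prescribed targets; gains
# above 2000 Hz are attenuated, as in the modified fitting approach.
modified_gains = reduce_high_frequency_gain(prescribed_gains)
```

The original table is left untouched so the pre- and postmodification fits can be compared side by side.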


Author(s): Rajinder Koul, James Dembowski

The purpose of this chapter is to review research conducted over the past two decades on the perception of synthetic speech by individuals with intellectual, language, and hearing impairments. Many individuals with little or no functional speech as a result of intellectual, language, physical, or multiple disabilities rely on non-speech communication systems to augment or replace natural speech. These systems include Speech Generating Devices (SGDs) that produce synthetic speech upon activation. Based on this review, two main conclusions are evident. The first is that persons with intellectual and/or language impairment demonstrate greater difficulty in processing synthetic speech than their matched typical peers. The second is that repeated exposure to synthetic speech allows individuals with intellectual and/or language disabilities to identify synthetic speech with increased accuracy and speed. This finding is of clinical significance, as it indicates that individuals who use SGDs become more proficient at understanding synthetic speech over time.


2020, Vol 6 (30), pp. eaba7830
Author(s): Laurianne Cabrera, Judit Gervain

Speech perception is constrained by auditory processing. Although at birth infants have an immature auditory system and limited language experience, they show remarkable speech perception skills. To assess neonates’ ability to process the complex acoustic cues of speech, we combined near-infrared spectroscopy (NIRS) and electroencephalography (EEG) to measure brain responses to syllables differing in consonants. The syllables were presented in three conditions preserving (i) original temporal modulations of speech [both amplitude modulation (AM) and frequency modulation (FM)], (ii) both fast and slow AM, but not FM, or (iii) only the slowest AM (<8 Hz). EEG responses indicate that neonates can encode consonants in all conditions, even without the fast temporal modulations, similarly to adults. Yet, the fast and slow AM activate different neural areas, as shown by NIRS. Thus, the immature human brain is already able to decompose the acoustic components of speech, laying the foundations of language learning.
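The most degraded condition, keeping only amplitude modulations below 8 Hz, can be sketched with standard vocoder-style processing: extract the amplitude envelope via the Hilbert transform, low-pass it at 8 Hz, and reapply it to a noise carrier. The toy signal, filter order, and carrier below are illustrative assumptions, not the authors' exact processing chain.

```python
# Illustrative slow-AM extraction (<8 Hz), vocoder-style.
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

fs = 16000  # sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)

# Toy "speech-like" signal: a 500 Hz carrier with a 4 Hz (slow) and a
# 40 Hz (fast) amplitude modulation.
signal = (1 + 0.5 * np.sin(2 * np.pi * 4 * t)
            + 0.5 * np.sin(2 * np.pi * 40 * t)) * np.sin(2 * np.pi * 500 * t)

# Amplitude envelope via the Hilbert transform.
envelope = np.abs(hilbert(signal))

# Keep only modulations below 8 Hz (4th-order Butterworth, zero-phase).
sos = butter(4, 8.0 / (fs / 2), btype="low", output="sos")
slow_am = sosfiltfilt(sos, envelope)

# Reapply the slow envelope to a noise carrier: the fast AM and the FM of
# the original signal are discarded, as in the most degraded condition.
rng = np.random.default_rng(0)
degraded = slow_am * rng.standard_normal(len(t))
```

In a real vocoder the signal would first be split into frequency bands and each band's envelope processed separately; the single-band version above only shows the core envelope manipulation.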


1990, Vol 33 (2), pp. 229-237
Author(s): Arlene Earley Carney, Marjorie Kienle, Richard T. Miyamoto

Suprasegmental and segmental speech perception tasks were administered to 8 patients with single-channel cochlear implants. Suprasegmental tasks included the recognition of syllable number, syllabic stress, and intonation. Segmental tasks included the recognition of vowels and consonants in three modalities: visual only, implant only, and visual + implant. Results were compared to those obtained from artificially deafened adults using a single-channel vibrotactile device (Carney, 1988; Carney & Beachler, 1986). The patterns of responses for both suprasegmental and segmental tasks were highly similar for both groups of subjects, despite differences between the characteristics of the subject samples. These results suggest that single-channel sensory devices, whether they be cochlear implants or vibrotactile aids, produce similar patterns of speech perception errors, even when differences are observed in overall performance level.

