Individual differences in the processing of speech and nonspeech sounds by normal-hearing listeners

2001 ◽  
Vol 110 (4) ◽  
pp. 2085-2095 ◽  
Author(s):  
Aimée M. Surprenant ◽  
Charles S. Watson


2014 ◽  
Vol 57 (5) ◽  
pp. 1961-1971
Author(s):  
Marianna Vatti ◽  
Sébastien Santurette ◽  
Niels Henrik Pontoppidan ◽  
Torsten Dau

Purpose Frequency fluctuations in human voices can usually be described as coherent frequency modulation (FM). As listeners with hearing impairment (HI listeners) are typically less sensitive to FM than listeners with normal hearing (NH listeners), this study investigated whether hearing loss affects the perception of a sung vowel based on FM cues. Method Vibrato maps were obtained in 14 NH and 12 HI listeners with different degrees of musical experience. The FM rate and FM excursion of a synthesized vowel, to which coherent FM was applied, were adjusted until a singing voice emerged. Results In NH listeners, adding FM to the steady vowel components produced perception of a singing voice for FM rates between 4.1 and 7.5 Hz and FM excursions between 17 and 83 cents on average. In contrast, HI listeners showed substantially broader vibrato maps. Individual differences in map boundaries were, overall, not correlated with audibility or frequency selectivity at the vowel fundamental frequency, with no clear effect of musical experience. Conclusion Overall, it was shown that hearing loss affects the perception of a sung vowel based on FM-rate and FM-excursion cues, possibly due to deficits in FM detection or discrimination or to a degraded ability to follow the rate of frequency changes.
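The FM manipulation described above can be sketched numerically: an excursion in cents maps to a peak frequency ratio of 2^(cents/1200), and coherent FM means every harmonic of the synthesized vowel follows the same modulator. Below is a minimal Python sketch; the FM rate and excursion defaults fall within the ranges reported for NH listeners, but the fundamental, harmonic structure, duration, and sampling rate are illustrative choices, not taken from the study.

```python
import numpy as np

def vibrato_vowel(f0=220.0, harmonics=(1, 2, 3, 4), fm_rate=5.0,
                  fm_excursion_cents=50.0, dur=1.0, fs=16000):
    """Synthesize a vowel-like tone complex with coherent sinusoidal FM.

    All harmonics share one modulator (coherent FM), as in the vibrato
    stimuli described above. Parameter values are illustrative.
    """
    t = np.arange(int(dur * fs)) / fs
    # Excursion in cents -> peak frequency ratio (1200 cents = 1 octave)
    ratio = 2.0 ** (fm_excursion_cents / 1200.0)
    depth = ratio - 1.0  # fractional peak frequency deviation
    # Instantaneous fundamental frequency, modulated at the FM rate
    inst_f0 = f0 * (1.0 + depth * np.sin(2 * np.pi * fm_rate * t))
    # Integrate instantaneous frequency to get the phase trajectory
    phase = 2 * np.pi * np.cumsum(inst_f0) / fs
    # Coherent FM: every harmonic follows the same phase trajectory
    return sum(np.sin(k * phase) for k in harmonics) / len(harmonics)
```

Setting `fm_excursion_cents=0` recovers the steady (unmodulated) vowel, which is the baseline against which the vibrato percept emerges.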


1993 ◽  
Vol 4 (2) ◽  
pp. 104-107 ◽  
Author(s):  
Janet K. Jensen ◽  
Donna L. Neff

Intensity (loudness), frequency (pitch), and duration discrimination were examined in 41 normal-hearing children, aged 4 to 6 years, and 9 adults. A second study retested 25 of the youngest children 12 to 18 months later. Intensity discrimination showed the least improvement with age and was adultlike by age 5 for most of the children. In contrast, frequency and duration discrimination showed highly significant improvement with age, but remained poorer than adults' discrimination for many 6-year-olds. Large individual differences were observed within all tasks and age groups.


1989 ◽  
Vol 32 (4) ◽  
pp. 944-948 ◽  
Author(s):  
Theodore S. Bell ◽  
Donald D. Dirks ◽  
Gail E. Kincaid

Invariance of error patterns in confusion matrices of varying dimensions was examined. Normal-hearing young adults were presented with closed-set arrangements of digitized syllable tokens, spoken by 1 male and 1 female talker, and selected from a set of 14 consonants (stops and fricatives). Each consonant was paired with the vowel /a/ in a vowel-consonant format and presented at three intensity levels. Patterns of errors among voiceless stops and among voiced fricatives were dependent on the set of alternatives. Voiceless fricatives and voiced stops were not significantly affected by the number of response alternatives. Speaker differences, individual differences among listeners, and implications relating to the generalization of confusion data collected in small closed-set arrangements are discussed.
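A closed-set confusion matrix of the kind analyzed here is simply a tally of stimulus/response pairs, with rows for the presented consonant and columns for the listener's response. A minimal sketch follows; the consonant subset and trial data are invented for illustration (the study used 14 consonants).

```python
import numpy as np

consonants = ["p", "t", "k", "b", "d", "g"]  # illustrative subset
idx = {c: i for i, c in enumerate(consonants)}

def confusion_matrix(trials):
    """Tally (stimulus, response) pairs into a closed-set confusion matrix.

    Rows index the presented consonant, columns the listener's response;
    off-diagonal cells are the confusions.
    """
    m = np.zeros((len(consonants), len(consonants)), dtype=int)
    for stim, resp in trials:
        m[idx[stim], idx[resp]] += 1
    return m

# Hypothetical trials: /p/ heard as /t/ once, /b/ heard as /d/ once
trials = [("p", "p"), ("p", "t"), ("t", "t"), ("k", "k"), ("b", "d")]
m = confusion_matrix(trials)
```

Restricting the response set to a smaller closed set changes where errors can land, which is the dependence on the set of alternatives that the study examined.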


2000 ◽  
Vol 108 (5) ◽  
pp. 2641-2642 ◽  
Author(s):  
Gary R. Kidd ◽  
Charles S. Watson ◽  
Brian Gygi

eLife ◽  
2016 ◽  
Vol 5 ◽  
Author(s):  
Daniel Oberfeld ◽  
Felicitas Klöckner-Nowotny

Listeners with normal hearing show considerable individual differences in speech understanding when competing speakers are present, as in a crowded restaurant. Here, we show that one source of this variance is individual differences in the ability to focus selective attention on a target stimulus in the presence of distractors. In 50 young normal-hearing listeners, performance in tasks measuring auditory and visual selective attention was associated with sentence identification in the presence of spatially separated competing speakers. Together, the measures of selective attention explained a similar proportion of variance as binaural sensitivity to the acoustic temporal fine structure. Working memory span, age, and audiometric thresholds showed no significant association with speech understanding. These results suggest that a reduced ability to focus attention on a target is one reason why some listeners with normal hearing sensitivity have difficulty communicating in situations with background noise.


2021 ◽  
Author(s):  
Joel I. Berger ◽  
Phillip E. Gander ◽  
Subong Kim ◽  
Adam T. Schwalje ◽  
Jihwan Woo ◽  
...  

Objectives Understanding speech in noise (SiN) is a complex task that recruits multiple cortical subsystems. Individuals vary in their ability to understand SiN. This cannot be explained by simple peripheral hearing profiles, but recent work by our group (Kim et al., 2021, NeuroImage) highlighted central neural factors underlying the variance in SiN ability in normal-hearing (NH) subjects. The current study examined neural predictors of speech-in-noise ability in a large cohort of cochlear-implant (CI) users, with the long-term goal of developing a simple electrophysiological correlate that could be implemented in clinics. Design We recorded electroencephalography (EEG) in 114 post-lingually deafened CI users while they completed the California Consonant Test (CCT), a word-in-noise task. In many subjects, data were also collected on two other commonly used clinical measures of speech perception: a word-in-quiet task (Consonant-Nucleus-Consonant [CNC] words) and a sentence-in-noise task (AzBio sentences). Neural activity was assessed at a single vertex electrode (Cz) to maximize generalizability to clinical situations. The N1-P2 complex of event-related potentials (ERPs) at this location was included in multiple linear regression analyses, along with several other demographic and hearing factors, as predictors of speech-in-noise performance. Results In general, there was good agreement between the scores on the three speech perception tasks. ERP amplitudes did not predict AzBio performance, which was instead predicted by the duration of device use, low-frequency hearing thresholds, and age. However, ERP amplitudes were strong predictors of performance on both word recognition tasks: the CCT (conducted simultaneously with EEG recording) and the CNC (conducted offline). These correlations held even after accounting for known predictors of performance, including residual low-frequency hearing thresholds. In CI users, better performance was predicted by an increased cortical response to the target word, in contrast to previous reports in normal-hearing subjects, in whom speech perception ability was accounted for by the ability to suppress noise. Conclusions These data indicate a neurophysiological correlate of speech-in-noise performance that can be captured relatively easily in the clinic, thereby revealing a richer profile of an individual's hearing performance than psychoacoustic measures alone. These results also highlight important differences between sentence and word recognition measures of performance and suggest that individual differences in these measures may be underpinned by different mechanisms. Finally, the contrast with prior reports of NH listeners on the same task suggests that CI users' performance may be explained by a different weighting of neural processes than that of NH listeners.
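The analysis described above, using ERP amplitudes alongside demographic and hearing factors as predictors of speech scores, can be sketched as ordinary least squares regression. Everything below is simulated for illustration; only the cohort size (114) comes from the abstract, and the predictor set is a reduced stand-in for the covariates the study actually used.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 114  # cohort size from the study; all values below are simulated

# Simulated predictors: N1-P2 amplitude (uV), duration of device use
# (years), and low-frequency hearing threshold (dB HL)
X = np.column_stack([
    rng.normal(5.0, 2.0, n),    # N1-P2 amplitude
    rng.uniform(1.0, 20.0, n),  # duration of device use
    rng.normal(60.0, 15.0, n),  # low-frequency threshold
])
true_beta = np.array([3.0, 0.5, -0.2])  # arbitrary "ground truth"
score = 50 + X @ true_beta + rng.normal(0, 5, n)  # word-in-noise score

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, score, rcond=None)
fitted = A @ beta
r2 = 1 - np.sum((score - fitted) ** 2) / np.sum((score - score.mean()) ** 2)
```

The key question in the study maps onto whether the ERP coefficient (`beta[1]` here) remains a significant predictor after the other covariates are included.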


2021 ◽  
Vol 25 ◽  
pp. 233121652110499 ◽  
Author(s):  
Erin M. Picou ◽  
Lori Rakita ◽  
Gabrielle H. Buono ◽  
Travis M. Moore

Adults with hearing loss demonstrate a reduced range of emotional responses to nonspeech sounds compared to their peers with normal hearing. The purpose of this study was to evaluate two possible strategies for addressing the effects of hearing loss on emotional responses: (a) increasing overall level and (b) hearing aid use (with and without nonlinear frequency compression, NFC). Twenty-three adults (mean age = 65.5 years) with mild-to-severe sensorineural hearing loss and 17 adults (mean age = 56.2 years) with normal hearing participated. All adults provided ratings of valence and arousal without hearing aids in response to nonspeech sounds presented at a moderate and at a high level. Adults with hearing loss also provided ratings while using individually fitted study hearing aids with two settings (NFC-OFF or NFC-ON). Hearing loss and hearing aid use impacted ratings of valence but not arousal. Listeners with hearing loss rated pleasant sounds as less pleasant than their peers, confirming findings in the extant literature. For both groups, increasing the overall level resulted in lower ratings of valence. For listeners with hearing loss, the use of hearing aids (NFC-OFF) also resulted in lower ratings of valence, but to a lesser extent than increasing the overall level. Activating NFC resulted in ratings that were similar to ratings without hearing aids (with a moderate presentation level) but did not improve ratings to match those from the listeners with normal hearing. These findings suggest that current interventions do not ameliorate the effects of hearing loss on emotional responses to sound.

