Individual differences in auditory abilities among normal‐hearing listeners

2000 ◽  
Vol 108 (5) ◽  
pp. 2641-2642 ◽  
Author(s):  
Gary R. Kidd ◽  
Charles S. Watson ◽  
Brian Gygi

2014 ◽ 
Vol 57 (5) ◽  
pp. 1961-1971
Author(s):  
Marianna Vatti ◽  
Sébastien Santurette ◽  
Niels Henrik Pontoppidan ◽  
Torsten Dau

Purpose Frequency fluctuations in human voices can usually be described as coherent frequency modulation (FM). As listeners with hearing impairment (HI listeners) are typically less sensitive to FM than listeners with normal hearing (NH listeners), this study investigated whether hearing loss affects the perception of a sung vowel based on FM cues. Method Vibrato maps were obtained in 14 NH and 12 HI listeners with different degrees of musical experience. The FM rate and FM excursion of a synthesized vowel, to which coherent FM was applied, were adjusted until a singing voice emerged. Results In NH listeners, adding FM to the steady vowel components produced perception of a singing voice for FM rates between 4.1 and 7.5 Hz and FM excursions between 17 and 83 cents on average. In contrast, HI listeners showed substantially broader vibrato maps. Individual differences in map boundaries were, overall, not correlated with audibility or frequency selectivity at the vowel fundamental frequency, with no clear effect of musical experience. Conclusion Overall, it was shown that hearing loss affects the perception of a sung vowel based on FM-rate and FM-excursion cues, possibly due to deficits in FM detection or discrimination or to a degraded ability to follow the rate of frequency changes.


1993 ◽  
Vol 4 (2) ◽  
pp. 104-107 ◽  
Author(s):  
Janet K. Jensen ◽  
Donna L. Neff

Intensity (loudness), frequency (pitch), and duration discrimination were examined in 41 normal-hearing children, aged 4 to 6 years, and 9 adults. A second study retested 25 of the youngest children 12 to 18 months later. Intensity discrimination showed the least improvement with age and was adultlike by age 5 for most of the children. In contrast, frequency and duration discrimination showed highly significant improvement with age, but remained poorer than adults' discrimination for many 6-year-olds. Large individual differences were observed within all tasks and age groups.


1989 ◽  
Vol 32 (4) ◽  
pp. 944-948 ◽  
Author(s):  
Theodore S. Bell ◽  
Donald D. Dirks ◽  
Gail E. Kincaid

The invariance of error patterns in confusion matrices of varying dimensions was examined. Normal-hearing young adults were presented with closed-set arrangements of digitized syllable tokens, spoken by 1 male and 1 female talker and selected from a set of 14 consonants (stops and fricatives). Each consonant was paired with the vowel /a/ in a vowel-consonant format and presented at three intensity levels. Patterns of errors among voiceless stops and among voiced fricatives were dependent on the set of alternatives. Voiceless fricatives and voiced stops were not significantly affected by the number of response alternatives. Speaker differences, individual differences among listeners, and implications for the generalization of confusion data collected in small closed-set arrangements are discussed.


eLife ◽  
2016 ◽  
Vol 5 ◽  
Author(s):  
Daniel Oberfeld ◽  
Felicitas Klöckner-Nowotny

Listeners with normal hearing show considerable individual differences in speech understanding when competing speakers are present, as in a crowded restaurant. Here, we show that one source of this variance is individual differences in the ability to focus selective attention on a target stimulus in the presence of distractors. In 50 young normal-hearing listeners, performance in tasks measuring auditory and visual selective attention was associated with sentence identification in the presence of spatially separated competing speakers. Together, the measures of selective attention explained a proportion of variance similar to that explained by binaural sensitivity to the acoustic temporal fine structure. Working memory span, age, and audiometric thresholds showed no significant association with speech understanding. These results suggest that a reduced ability to focus attention on a target is one reason why some listeners with normal hearing sensitivity have difficulty communicating in situations with background noise.


2021 ◽  
Author(s):  
Joel I. Berger ◽  
Phillip E. Gander ◽  
Subong Kim ◽  
Adam T. Schwalje ◽  
Jihwan Woo ◽  
...  

Abstract
Objectives: Understanding speech in noise (SiN) is a complex task that recruits multiple cortical subsystems. Individuals vary in their ability to understand SiN. This cannot be explained by simple peripheral hearing profiles, but recent work by our group (Kim et al., 2021, NeuroImage) highlighted central neural factors underlying the variance in SiN ability in normal-hearing (NH) subjects. The current study examined neural predictors of SiN ability in a large cohort of cochlear-implant (CI) users, with the long-term goal of developing a simple electrophysiological correlate that could be implemented in clinics.
Design: We recorded electroencephalography (EEG) in 114 post-lingually deafened CI users while they completed the California Consonant Test (CCT), a word-in-noise task. In many subjects, data were also collected on two other commonly used clinical measures of speech perception: a word-in-quiet task (Consonant-Nucleus-Consonant [CNC] words) and a sentence-in-noise task (AzBio sentences). Neural activity was assessed at a single vertex electrode (Cz) to maximize generalizability to clinical situations. The N1-P2 complex of event-related potentials (ERPs) at this location was included in multiple linear regression analyses, along with several other demographic and hearing factors, as predictors of SiN performance.
Results: In general, there was good agreement between the scores on the three speech perception tasks. ERP amplitudes did not predict AzBio performance, which was instead predicted by duration of device use, low-frequency hearing thresholds, and age. However, ERP amplitudes were strong predictors of performance on both word recognition tasks: the CCT (conducted simultaneously with the EEG recording) and the CNC (conducted offline). These correlations held even after accounting for known predictors of performance, including residual low-frequency hearing thresholds. In CI users, better performance was predicted by an increased cortical response to the target word, in contrast to previous reports in normal-hearing subjects, in whom speech perception ability was accounted for by the ability to suppress noise.
Conclusions: These data indicate a neurophysiological correlate of SiN performance that can be captured relatively easily in the clinic, thereby revealing a richer profile of an individual's hearing performance than psychoacoustic measures alone. These results also highlight important differences between sentence and word recognition measures of performance and suggest that individual differences in these measures may be underwritten by different mechanisms. Finally, the contrast with prior reports of NH listeners in the same task suggests that CI users' performance may be explained by a different weighting of neural processes than in NH listeners.


2017 ◽  
Vol 42 (3) ◽  
pp. 351-364
Author(s):  
Monika Kordus ◽  
Jan Żera

Abstract
Loudness functions and binaural loudness summation were investigated in acoustically stimulated bilaterally implanted cochlear implant users. The study was aimed at evaluating the growth of loudness functions and binaural loudness summation in cochlear implant subjects as a function of stimulus presentation level at different frequencies. Loudness was assessed using a rating procedure on a scale of 0 to 100. Three experimental conditions were tested: monaural right, monaural left, and binaural, each with bands of noise with center frequencies of 0.25, 1, and 4 kHz. Fifteen implanted and five normal-hearing subjects (control group) participated in the experiments. Results demonstrated large variability in the slopes of the loudness functions and the presence of loudness summation in bilateral cochlear implant users, with large individual differences among subjects.

