High- and Low-Performing Adult Cochlear Implant Users on High-Variability Sentence Recognition: Differences in Auditory Spectral Resolution and Neurocognitive Functioning

2020
Vol 31 (05)
pp. 324-335
Author(s):
Terrin N. Tamati
Christin Ray
Kara J. Vasil
David B. Pisoni
Aaron C. Moberly

Abstract

Background: Postlingually deafened adult cochlear implant (CI) users routinely display large individual differences in the ability to recognize and understand speech, especially in adverse listening conditions. Although individual differences have been linked to several sensory ("bottom-up") and cognitive ("top-down") factors, little is currently known about the relative contributions of these factors in high- and low-performing CI users.

Purpose: The aim of the study was to investigate differences in sensory functioning and neurocognitive functioning between high- and low-performing CI users on the Perceptually Robust English Sentence Test Open-set (PRESTO), a high-variability sentence recognition test containing sentence materials produced by multiple male and female talkers with diverse regional accents.

Research Design: CI users with accuracy scores in the upper (HiPRESTO) or lower (LoPRESTO) quartiles on PRESTO in quiet completed a battery of behavioral tasks designed to assess spectral resolution and neurocognitive functioning.

Study Sample: Twenty-one postlingually deafened adult CI users: 11 HiPRESTO and 10 LoPRESTO participants.

Data Collection and Analysis: A discriminant analysis was carried out to determine the extent to which measures of spectral resolution and neurocognitive functioning discriminate HiPRESTO and LoPRESTO CI users. Auditory spectral resolution was measured using the Spectral-Temporally Modulated Ripple Test (SMRT). Neurocognitive functioning was assessed with visual measures of working memory (digit span), inhibitory control (Stroop), speed of lexical/phonological access (Test of Word Reading Efficiency), and nonverbal reasoning (Raven's Progressive Matrices).

Results: HiPRESTO and LoPRESTO CI users were discriminated primarily by performance on the SMRT and secondarily by the Raven's test. No other neurocognitive measures contributed substantially to the discriminant function.

Conclusions: High- and low-performing CI users differed in spectral resolution and, to a lesser extent, nonverbal reasoning. These findings suggest that the extreme groups are determined by global factors of richness of sensory information and domain-general, nonverbal intelligence, rather than by specific neurocognitive processing operations related to speech perception and spoken word recognition. Thus, although both bottom-up and top-down information contribute to speech recognition performance, low-performing CI users may not be sufficiently able to rely on neurocognitive skills specific to speech recognition to enhance processing of spectrally degraded input in adverse conditions involving high talker variability.
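The two-group discriminant analysis described above can be sketched numerically. The sketch below uses entirely synthetic data (only the group sizes are taken from the study; the SMRT, Raven's, and digit span scores are made up for illustration) and a plain-NumPy Fisher two-group discriminant, not the study's actual analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
group = np.array([1] * 11 + [0] * 10)  # 1 = HiPRESTO, 0 = LoPRESTO

# Simulated predictors: SMRT separates the groups strongly, Raven's
# weakly, and digit span not at all (mirroring the reported pattern).
smrt = rng.normal(np.where(group == 1, 3.0, 1.5), 0.5)
ravens = rng.normal(np.where(group == 1, 0.6, 0.4), 0.15)
digit_span = rng.normal(6.0, 1.0, size=group.size)
X = np.column_stack([smrt, ravens, digit_span])

# Fisher's two-group discriminant: w = Sw^-1 (mean_hi - mean_lo),
# where Sw is the pooled within-group scatter matrix.
X_hi, X_lo = X[group == 1], X[group == 0]
Sw = (np.cov(X_hi, rowvar=False) * (len(X_hi) - 1)
      + np.cov(X_lo, rowvar=False) * (len(X_lo) - 1))
w = np.linalg.solve(Sw, X_hi.mean(axis=0) - X_lo.mean(axis=0))

# Classify by projecting onto w and thresholding at the midpoint
# between the two group means on the discriminant axis.
proj = X @ w
threshold = (proj[group == 1].mean() + proj[group == 0].mean()) / 2
pred = (proj > threshold).astype(int)
accuracy = (pred == group).mean()
```

The discriminant weights in `w` play the role of the function loadings: a predictor that separates the groups (here, the simulated SMRT scores) dominates the function, while a non-discriminating predictor contributes little.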

2014
Vol 25 (09)
pp. 869-892
Author(s):
Terrin N. Tamati
David B. Pisoni

Background: Natural variability in speech is a significant challenge to robust spoken word recognition. In everyday listening environments, listeners must quickly adapt and adjust to multiple sources of variability in both the signal and the listening environment. High-variability speech may be particularly difficult to understand for non-native listeners, who have less experience with the second language (L2) phonological system and less detailed knowledge of sociolinguistic variation in the L2.

Purpose: The purpose of this study was to investigate the effects of high-variability sentences on non-native speech recognition and to explore the underlying sources of individual differences in the speech recognition abilities of non-native listeners.

Research Design: Participants completed two sentence recognition tasks involving high-variability and low-variability sentences. They also completed a battery of behavioral tasks and self-report questionnaires designed to assess their indexical processing skills, vocabulary knowledge, and several core neurocognitive abilities.

Study Sample: Native speakers of Mandarin (n = 25) living in the United States, recruited from the Indiana University community, participated in the current study. A native comparison group consisted of scores obtained from native speakers of English (n = 21) in the Indiana University community, taken from an earlier study.

Data Collection and Analysis: Speech recognition in high-variability listening conditions was assessed with a sentence recognition task using sentences from PRESTO (Perceptually Robust English Sentence Test Open-set) mixed in 6-talker multitalker babble. Speech recognition in low-variability listening conditions was assessed using sentences from HINT (Hearing in Noise Test) mixed in 6-talker multitalker babble. Indexical processing skills were measured using a talker discrimination task, a gender discrimination task, and a forced-choice regional dialect categorization task. Vocabulary knowledge was assessed with the WordFam word familiarity test, and executive functioning was assessed with the BRIEF-A (Behavioral Rating Inventory of Executive Function, Adult Version) self-report questionnaire. Scores from the non-native listeners on the behavioral tasks and self-report questionnaires were compared with scores obtained from native listeners tested in a previous study and were examined for individual differences.

Results: Non-native keyword recognition scores were significantly lower on PRESTO sentences than on HINT sentences. Non-native listeners' keyword recognition scores were also lower than native listeners' scores on both sentence recognition tasks. Differences in performance on the sentence recognition tasks between non-native and native listeners were larger on PRESTO than on HINT, although group differences varied by signal-to-noise ratio. The non-native and native groups also differed in the ability to categorize talkers by region of origin and in vocabulary knowledge. Individual non-native word recognition accuracy on PRESTO sentences in multitalker babble at more favorable signal-to-noise ratios was related to several BRIEF-A subscales and composite scores. However, non-native performance on PRESTO was not related to regional dialect categorization, talker and gender discrimination, or vocabulary knowledge.

Conclusions: High-variability sentences in multitalker babble were particularly challenging for non-native listeners. Difficulty under high-variability testing conditions was related to a lack of experience with the L2, especially L2 sociolinguistic information, compared with native listeners. Individual differences among the non-native listeners were related to weaknesses in core neurocognitive abilities affecting behavioral control in everyday life.
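The keyword recognition scores reported for PRESTO and HINT are, at their simplest, the percentage of a sentence's scoring keywords that the listener correctly reports. A minimal sketch of such a scorer (a deliberately simplified rule; the actual tests have their own scoring conventions for morphological variants, homophones, and so on):

```python
def keyword_score(target_keywords, response):
    """Percentage of target keywords present in the listener's response.

    Simplified illustration: exact word matches only, case-insensitive.
    """
    response_words = set(response.lower().split())
    hits = sum(1 for kw in target_keywords if kw.lower() in response_words)
    return 100.0 * hits / len(target_keywords)

# Hypothetical trial: 2 of 3 keywords reported -> 66.7%
score = keyword_score(["dog", "ran", "park"], "the dog ran to the store")
```

Per-sentence scores like this are then averaged across a list to produce the percent-correct values compared between conditions.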


2020
Vol 63 (6)
pp. 1712-1725
Author(s):
Xin Luo
Courtney Kolberg
Kathryn R. Pulling
Tamiko Azuma

Purpose: This study aimed to evaluate the effects of aging and cochlear implant (CI) use on psychoacoustic and speech recognition abilities and to assess the relative contributions of psychoacoustic and demographic factors to the speech recognition of older CI (OCI) users.

Method: Twelve OCI users, 12 older acoustic-hearing (OAH) listeners age-matched to the OCI users, and 12 younger normal-hearing (YNH) listeners underwent tests of temporal amplitude modulation detection, temporal gap detection in noise, and spectral–temporal modulated ripple discrimination. Speech reception thresholds were measured for sentence recognition in multitalker, speech-babble noise.

Results: Statistical analyses showed that, for the small sample of OAH listeners, the degree of hearing loss did not significantly affect any outcome measure. Temporal resolution, spectral resolution, and speech recognition all degraded significantly with both age and the use of a CI (i.e., YNH performance was better than OAH, and OAH better than OCI). Although both duration of CI use and ripple discrimination were significantly correlated with OCI users' speech recognition, duration of CI use no longer had a significant effect on speech recognition once the effect of spectral–temporal ripple discrimination performance was taken into account. For OAH listeners, the only significant predictor of speech recognition was temporal gap detection performance.

Conclusion: The preliminary results suggest that the speech recognition of OCI users may improve with longer duration of CI use, mainly through higher perceptual acuity to spectral–temporal modulated ripples in acoustic stimuli.


2021
Vol 42 (10S)
pp. S33-S41
Author(s):
Aaron C. Moberly
Jessica H. Lewis
Kara J. Vasil
Christin Ray
Terrin N. Tamati

2021
Vol 32 (07)
pp. 469-476
Author(s):
Maria Madalena Canina Pinheiro
Patricia Cotta Mancini
Alexandra Dezani Soares
Ângela Ribas
Danielle Penna Lima
...

Abstract

Background: Speech recognition in noisy environments is a challenge for both cochlear implant (CI) users and device manufacturers. CI manufacturers have been investing in technological innovations for processors and researching strategies to improve signal processing and device design for better aesthetic acceptance and everyday use.

Purpose: This study aimed to compare speech recognition in CI users using off-the-ear (OTE) and behind-the-ear (BTE) processors.

Design: A cross-sectional study was conducted with 51 CI recipients, all users of the BTE Nucleus 5 (CP810) sound processor. Speech perception performance was compared in quiet and noisy conditions using the BTE Nucleus 5 (N5) sound processor and the OTE Kanso sound processor. Each participant was tested with the Brazilian-Portuguese version of the Hearing in Noise Test, using each sound processor in randomized order. Three test conditions were analyzed with both sound processors: (i) speech level fixed at 65 dB sound pressure level in quiet, (ii) speech and noise at fixed levels, and (iii) adaptive speech levels with a fixed noise level. To determine the relative performance of the OTE with respect to the BTE, paired comparison analyses were performed.

Results: Paired t-tests showed no significant difference between the N5 and the Kanso in quiet conditions. In all noise conditions, the performance of the OTE (Kanso) sound processor was superior to that of the BTE (N5), regardless of the order in which they were used. With speech and noise at fixed levels, a significant mean difference of 7.4 percentage points was seen between the Kanso (78.1%) and N5 (70.7%) sentence scores.

Conclusion: CI users had a lower signal-to-noise ratio and a higher percentage of sentence recognition with the OTE processor than with the BTE processor.
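A within-subject comparison of this kind (each participant tested with both processors) is the textbook use case for a paired t-test: the test is applied to the per-participant score differences. The sketch below uses SciPy with made-up scores, not the study's data; the group size and the rough score levels are the only values borrowed from the abstract:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 51  # participants, each tested with both processors

# Simulated sentence scores (% correct): BTE baseline, plus a
# simulated per-participant OTE advantage of a few points.
bte_scores = rng.normal(70.7, 8.0, n)               # N5 (BTE)
ote_scores = bte_scores + rng.normal(7.5, 3.0, n)   # Kanso (OTE)

# Paired t-test: tests whether the mean within-subject difference
# (OTE minus BTE) differs from zero.
t_stat, p_value = stats.ttest_rel(ote_scores, bte_scores)
mean_diff = (ote_scores - bte_scores).mean()
```

Pairing removes between-participant variability from the comparison, which is why the same participants wearing both processors gives a much more sensitive test than two independent groups would.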


2007
Vol 14 (5)
pp. 840-845
Author(s):
Kenith V. Sobel
Matthew P. Gerrie
Bradley J. Poole
Michael J. Kane

Author(s):  
Hadeer Derawi ◽  
Eva Reinisch ◽  
Yafit Gabay

Abstract

Speech recognition is a complex human behavior in the course of which listeners must integrate the detailed phonetic information present in the acoustic signal with their general linguistic knowledge. It is commonly assumed that this process occurs effortlessly for most people, but it remains unclear whether this also holds true in developmental dyslexia (DD), a condition characterized by perceptual deficits. In the present study, we used a dual-task setting to test the assumption that speech recognition is effortful for people with DD. In particular, we tested the Ganong effect (i.e., lexical bias on phoneme identification) while participants performed a secondary task of either low or high cognitive demand. We presumed that reduced efficiency of perceptual processing in DD would manifest as greater modulation of primary-task performance by cognitive load. Results revealed that this was indeed the case: we found a larger Ganong effect in the DD group under high than under low cognitive load, and this modulation was larger than that observed for typically developed (TD) readers. Furthermore, phoneme categorization was less precise in the DD group than in the TD group. These findings suggest that individuals with DD rely more heavily on top-down, lexically mediated perception processes, possibly as a compensatory mechanism for reduced efficiency in the bottom-up use of acoustic cues. This indicates an imbalance between bottom-up and top-down processes in the speech recognition of individuals with DD.


Author(s):  
Adam K. Bosen ◽  
Victoria A. Sevich ◽  
Shauntelle A. Cannon

Purpose: In individuals with cochlear implants, speech recognition is not associated with tests of working memory that primarily reflect storage, such as forward digit span. In contrast, our previous work found that vocoded speech recognition in individuals with normal hearing was correlated with performance on a forward digit span task. A possible explanation for this difference across groups is that variability in auditory resolution across individuals with cochlear implants could conceal the true relationship between speech and memory tasks. Here, our goal was to determine whether performance on forward digit span and speech recognition tasks is correlated in individuals with cochlear implants after controlling for individual differences in auditory resolution.

Method: We measured sentence recognition ability in 20 individuals with cochlear implants using Perceptually Robust English Sentence Test Open-set sentences. Spectral and temporal modulation detection tasks were used to assess individual differences in auditory resolution, an auditory forward digit span task was used to assess working memory storage, and self-reported word familiarity was used to assess vocabulary.

Results: Individual differences in speech recognition were predicted by spectral and temporal resolution. A correlation was found between forward digit span and speech recognition, but this correlation was not significant after controlling for spectral and temporal resolution. No relationship was found between word familiarity and speech recognition. Forward digit span performance was not associated with individual differences in auditory resolution.

Conclusions: Our findings support the idea that sentence recognition in individuals with cochlear implants is primarily limited by individual differences in working memory processing, not storage. Studies examining the relationship between speech and memory should control for individual differences in auditory resolution.
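"Controlling for" a third variable when testing a correlation, as described above, is commonly done with a partial correlation: regress the covariates out of both variables and correlate the residuals. The sketch below uses synthetic data built so that digit span and speech recognition are linked only through a shared resolution factor, which is the confound scenario the abstract raises; variable names are illustrative, not the study's dataset:

```python
import numpy as np

def partial_corr(x, y, covariates):
    """Correlation between x and y after regressing out covariates
    (OLS residuals on both sides, with an intercept)."""
    Z = np.column_stack([np.ones(len(x)), covariates])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Synthetic example: digit span and speech recognition both driven by
# spectral resolution, with no direct link between them.
rng = np.random.default_rng(2)
spectral = rng.normal(size=200)
digit_span = spectral + rng.normal(scale=0.5, size=200)
speech = spectral + rng.normal(scale=0.5, size=200)

raw_r = np.corrcoef(digit_span, speech)[0, 1]          # inflated
partial_r = partial_corr(digit_span, speech, spectral[:, None])  # near zero
```

In this construction the raw correlation is substantial while the partial correlation collapses toward zero, which is the pattern the Results section reports for forward digit span once spectral and temporal resolution are accounted for.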

