Effects of talker gender and spatial location on sentence recognition for young and old listeners with normal hearing

2011 ◽  
Vol 7 (2) ◽  
pp. 145-152
Author(s):  
Jae Hee Lee ◽  
Hyun Mee Chang


CoDAS ◽  
2015 ◽  
Vol 27 (2) ◽  
pp. 148-154 ◽  
Author(s):  
Maristela Julio Costa ◽  
Sinéia Neujahr dos Santos ◽  
Alexandre Hundertmarck Lessa ◽  
Carolina Lisboa Mezzomo

Purpose: To present and describe a new strategy and protocol for obtaining Sentence Recognition Indexes (SRI) with the Lists of Phrases in Portuguese (LPP) test, considering each word in the analysis of responses, and to analyze and compare the results obtained with the previous and new strategies in order to check their applicability and suitability. Methods: To score each word of the sentence, words were classified according to their importance as function or content words and assigned two and one points, respectively. SRI were obtained from 33 adults with normal hearing, and the results of the two strategies were compared. Results: A new protocol was established. Each point corresponds to the following percentage of each list: 1B, 1.11%; 2B, 1.13%; 3B, 1.17%; 4B, 1.16%; 5B, 1.20%; and 6B, 1.11%. The median SRI obtained with the usual and new strategies were, respectively: list 1B, 60 and 82.57%; 2B, 70 and 80.79%; 3B, 50 and 76.60%; 4B, 70 and 82.60%; 5B, 50 and 77.20%; and 6B, 60 and 82.14%. A significant difference was found between the two strategies. Conclusion: A new strategy and protocol for evaluating the SRI with the LPP test were developed, scoring each word of the sentence. Comparison of the responses showed that word-level scoring measures each individual's actual speech recognition ability in greater detail and with less variability. The new strategy and protocol thus confirmed their applicability and suitability for assessing Sentence Recognition Indexes in quiet in individuals with hearing disorders in a specific listening condition.
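The word-weighted scoring described above can be sketched as follows. Only the point weights (two for function words, one for content words) and the percent-per-point values per list come from the abstract; the function-word set, word lists, and response data below are invented for illustration:

```python
# Percent of the list's total score that each point represents (from the abstract).
PERCENT_PER_POINT = {"1B": 1.11, "2B": 1.13, "3B": 1.17,
                     "4B": 1.16, "5B": 1.20, "6B": 1.11}

def word_points(word, function_words):
    # Per the protocol: function words score two points, content words one.
    return 2 if word in function_words else 1

def sentence_recognition_index(responses, function_words, list_id):
    # responses: iterable of (word, was_correctly_repeated) pairs for one list.
    earned = sum(word_points(word, function_words)
                 for word, correct in responses if correct)
    return earned * PERCENT_PER_POINT[list_id]
```

Summing weighted points over all correctly repeated words and multiplying by the list's percent-per-point yields the SRI as a percentage, which is what makes the word-level measure finer-grained than whole-sentence scoring.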


2016 ◽  
Vol 59 (1) ◽  
pp. 110-121 ◽  
Author(s):  
Marc Brennan ◽  
Ryan McCreery ◽  
Judy Kopun ◽  
Dawna Lewis ◽  
Joshua Alexander ◽  
...  

Purpose This study compared masking release for adults and children with normal hearing and hearing loss. For the participants with hearing loss, masking release using simulated hearing aid amplification with 2 different compression speeds (slow, fast) was compared. Method Sentence recognition in unmodulated noise was compared with recognition in modulated noise (masking release). Recognition was measured for participants with hearing loss using individualized amplification via the hearing-aid simulator. Results Adults with hearing loss showed greater masking release than the children with hearing loss. Average masking release was small (1 dB) and did not depend on hearing status. Masking release was comparable for slow and fast compression. Conclusions The use of amplification in this study contrasts with previous studies that did not use amplification. The results suggest that when differences in audibility are reduced, participants with hearing loss may be able to take advantage of dips in the noise levels, similar to participants with normal hearing. Although children required a more favorable signal-to-noise ratio than adults for both unmodulated and modulated noise, masking release was not statistically different. However, the ability to detect a difference may have been limited by the small amount of masking release observed.


2012 ◽  
Vol 23 (09) ◽  
pp. 686-696 ◽  
Author(s):  
Andrew Stuart ◽  
Alyson K. Butler

Background: One purported role of the medial olivocochlear (MOC) efferent system is to reduce the effects of masking noise. MOC system functioning can be evaluated noninvasively in humans through contralateral suppression of otoacoustic emissions. It has been suggested that the strength of MOC efferent activity should be positively associated with listening performance in noise. Purpose: The objective of the study was to further explore this notion by examining contralateral suppression of transient evoked otoacoustic emissions (TEOAEs) and sentence recognition in two noises in young adults with normal hearing. Research Design: A repeated measures multivariate quasi-experimental design was employed. Study Sample: Thirty-two young adult females with normal hearing participated. Data Collection and Analysis: Reception thresholds for sentences (RTSs) were determined monaurally and binaurally in quiet and in competing continuous and interrupted noises. Both noises had an identical power spectrum and differed only in their temporal continuity. "Release from masking" was computed by subtracting RTS signal-to-noise ratios in interrupted noise from those in continuous noise. TEOAEs were evoked with 80 dB peSPL click stimuli. To examine contralateral suppression, TEOAEs were evaluated with 60 dB peSPL click stimuli with and without a contralateral 65 dB SPL white noise suppressor. Results: A binaural advantage was observed for RTSs in quiet and noise (p < .0001), while there was no difference between ears (p > .05). In noise, performance was superior in the interrupted noise (i.e., RTSs were lower vs. continuous noise; p < .0001). There were no statistically significant differences in TEOAE levels between ears (p > .05). There was also no significant difference in the amount of suppression between ears (p = .41). There were no significant correlations or predictive linear relations between the amount of TEOAE suppression and any indices of sentence recognition in noise (i.e., RTS signal-to-noise ratios and release from masking; p > .05). Conclusions: The findings are not consistent with the notion that increased medial olivocochlear efferent feedback, as assessed via contralateral suppression of TEOAEs, is associated with improved speech perception in continuous and interrupted noise.
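The release-from-masking index described in the abstract is simple arithmetic on the two RTS signal-to-noise ratios. A minimal sketch (the function name and example values below are assumptions for illustration, not the study's data):

```python
def release_from_masking(rts_snr_continuous_db, rts_snr_interrupted_db):
    # RTS is reported as the signal-to-noise ratio at threshold, so lower
    # (more negative) values mean better performance. Subtracting the
    # interrupted-noise RTS from the continuous-noise RTS yields a positive
    # release when listeners perform better in interrupted noise.
    return rts_snr_continuous_db - rts_snr_interrupted_db
```

A listener with an RTS of −2 dB SNR in continuous noise and −8 dB SNR in interrupted noise would thus show 6 dB of release from masking.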


2017 ◽  
Vol 60 (4) ◽  
pp. 1046-1061 ◽  
Author(s):  
Aaron C. Moberly ◽  
Michael S. Harris ◽  
Lauren Boyce ◽  
Susan Nittrouer

Purpose Models of speech recognition suggest that “top-down” linguistic and cognitive functions, such as use of phonotactic constraints and working memory, facilitate recognition under conditions of degradation, such as in noise. The question addressed in this study was what happens to these functions when a listener who has experienced years of hearing loss obtains a cochlear implant. Method Thirty adults with cochlear implants and 30 age-matched controls with age-normal hearing underwent testing of verbal working memory using digit span and serial recall of words. Phonological capacities were assessed using a lexical decision task and nonword repetition. Recognition of words in sentences in speech-shaped noise was measured. Results Implant users had only slightly poorer working memory accuracy than did controls and only on serial recall of words; however, phonological sensitivity was highly impaired. Working memory did not facilitate speech recognition in noise for either group. Phonological sensitivity predicted sentence recognition for implant users but not for listeners with normal hearing. Conclusion Clinical speech recognition outcomes for adult implant users relate to the ability of these users to process phonological information. Results suggest that phonological capacities may serve as potential clinical targets through rehabilitative training. Such novel interventions may be particularly helpful for older adult implant users.


2014 ◽  
Vol 57 (2) ◽  
pp. 532-554 ◽  
Author(s):  
Caili Ji ◽  
John J. Galvin ◽  
Yi-ping Chang ◽  
Anting Xu ◽  
Qian-Jie Fu

Purpose The aim of this study was to evaluate the understanding of English sentences produced by native (English) and nonnative (Spanish) talkers by listeners with normal hearing (NH) and listeners with cochlear implants (CIs). Method Sentence recognition in noise was measured in adult subjects with CIs and subjects with NH, all of whom were native talkers of American English. Test sentences were from the Hearing in Noise Test (HINT) database and were produced in English by four native and eight nonnative talkers. Subjects also rated the intelligibility and accent for each talker. Results The speech recognition thresholds in noise of subjects with CIs and subjects with NH were 4.23 dB and 1.32 dB poorer with nonnative talkers than with native talkers, respectively. Performance was significantly correlated with talker intelligibility and accent ratings for subjects with CIs but only correlated with talker intelligibility ratings for subjects with NH. For all subjects, performance with individual nonnative talkers was significantly correlated with talkers' number of years of residence in the United States. Conclusion CI users exhibited a larger deficit in speech understanding with nonnative talkers than did subjects with NH, relative to native talkers. Nonnative talkers' experience with native culture contributed strongly to speech understanding in noise, intelligibility ratings, and accent ratings of both listeners with NH and listeners with CIs.


2019 ◽  
Vol 24 (3) ◽  
pp. 127-138
Author(s):  
Aaron C. Moberly ◽  
Jameson K. Mattingly ◽  
Irina Castellanos

Background: Previous research has demonstrated an association of scores on a visual test of nonverbal reasoning, Raven's Progressive Matrices (RPM), with scores on open-set sentence recognition in quiet for adult cochlear implant (CI) users as well as for adults with normal hearing (NH) listening to noise-vocoded sentence materials. Moreover, in that study, CI users demonstrated poorer nonverbal reasoning when compared with NH peers. However, it remains unclear what underlying neurocognitive processes contributed to the association of nonverbal reasoning scores with sentence recognition, and to the poorer scores demonstrated by CI users. Objectives: Three hypotheses were tested: (1) nonverbal reasoning abilities of adult CI users and NH age-matched peers would be predicted by performance on more basic neurocognitive measures of working memory capacity, information-processing speed, inhibitory control, and concentration; (2) nonverbal reasoning would mediate the effects of more basic neurocognitive functions on sentence recognition in both groups; and (3) group differences in more basic neurocognitive functions would explain the group differences previously demonstrated in nonverbal reasoning. Method: Eighty-three participants (40 CI and 43 NH) underwent testing of sentence recognition using two sets of sentence materials: sentences produced by a single male talker (Harvard sentences) and high-variability sentences produced by multiple talkers (Perceptually Robust English Sentence Test Open-set, PRESTO). Participants also completed testing of nonverbal reasoning using a visual computerized RPM test, and additional neurocognitive assessments were collected using a visual Digit Span test and a Stroop Color-Word task. Multivariate regression analyses were performed to test our hypotheses while treating age as a covariate. Results: In the CI group, information-processing speed on the Stroop task predicted RPM performance, and RPM scores mediated the effects of information-processing speed on sentence recognition abilities for both Harvard and PRESTO sentences. In contrast, for the NH group, Stroop inhibitory control predicted RPM performance, and a trend was seen toward RPM scores mediating the effects of inhibitory control on sentence recognition, but only for PRESTO sentences. Poorer RPM performance in CI users than in NH controls could be partially attributed to slower information-processing speed. Conclusions: Neurocognitive functions contributed differentially to nonverbal reasoning performance in CI users as compared with NH peers, and nonverbal reasoning appeared to partially mediate the effects of these different neurocognitive functions on sentence recognition in both groups, at least for PRESTO sentences. Slower information-processing speed accounted for poorer nonverbal reasoning scores in CI users. Thus, it may be that prolonged auditory deprivation contributes to cognitive decline through slower information processing.
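The mediation logic in this abstract (a predictor affecting sentence recognition partly through nonverbal reasoning) follows the standard regression-based decomposition: the total effect of the predictor equals its direct effect plus the indirect effect through the mediator. A minimal sketch under that assumption, with entirely simulated data and hypothetical variable names, not the authors' actual analysis:

```python
import numpy as np

# Simulated data: speed -> rpm -> recognition, plus a small direct path.
rng = np.random.default_rng(0)
n = 80
speed = rng.normal(size=n)                              # predictor (e.g., processing speed)
rpm = 0.6 * speed + rng.normal(scale=0.8, size=n)       # mediator (e.g., RPM score)
recog = 0.5 * rpm + 0.1 * speed + rng.normal(scale=0.8, size=n)  # outcome

def slope(x, y):
    # OLS slope of y on x, with an intercept.
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

total = slope(speed, recog)        # total effect c
a = slope(speed, rpm)              # path a: predictor -> mediator
# Direct effect c' and path b from regressing the outcome on both.
X = np.column_stack([np.ones(n), speed, rpm])
_, c_prime, b = np.linalg.lstsq(X, recog, rcond=None)[0]
indirect = a * b                   # mediated (indirect) effect
```

In ordinary least squares the decomposition `total == c_prime + indirect` holds exactly, which is what "RPM scores mediated the effect" operationalizes: a substantial share of the total effect flows through the `a * b` path.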

