A Sequential Sentence Paradigm Using Revised PRESTO Sentence Lists

2016, Vol 27 (08), pp. 647–660
Author(s): Andrea R. Plotkowski, Joshua M. Alexander

Background: Listening in challenging situations requires explicit cognitive resources to decode and process speech. Traditional speech recognition tests are limited in documenting this cognitive effort, which may differ greatly between individuals or listening conditions despite similar scores. A sequential sentence paradigm was designed to be more sensitive to individual differences in demands on verbal processing during speech recognition. Purpose: The purpose of this study was to establish the feasibility, validity, and equivalency of test materials in the sequential sentence paradigm as well as to evaluate the effects of masker type, signal-to-noise ratio (SNR), and working memory (WM) capacity on performance in the task. Research Design: Listeners heard a pair of sentences and repeated aloud the second sentence (immediate recall) and then wrote down the first sentence (delayed recall). Sentence lists were from the Perceptually Robust English Sentence Test Open-set (PRESTO) test. In experiment I, listeners completed a traditional speech recognition task. In experiment II, listeners completed the sequential sentence task at one SNR. In experiment III, the masker type (steady noise versus multitalker babble) and SNR were varied to demonstrate the effects of WM as the speech material increased in difficulty. Study Sample: Young, normal-hearing adults (total n = 53) from the Purdue University community completed one of the three experiments. Data Collection and Analysis: Keyword scoring of the PRESTO lists was completed for both the immediate- and delayed-recall sentences. The Verbal Letter Monitoring task, a test of WM, was used to separate listeners into a low-WM or high-WM group. Results: Experiment I indicated that mean recognition on the single-sentence task was highly variable between the original PRESTO lists. 
Modest rearrangement of the sentences yielded 18 statistically equivalent lists (mean recognition = 65.0%, range = 64.4–65.7%), which were used in the sequential sentence task in experiment II. In the new test paradigm, recognition of the immediate-recall sentences was not statistically different from the single-sentence task, indicating that there were no cognitive load effects from the delayed-recall sentences. Finally, experiment III indicated that multitalker babble was as detrimental as steady-state noise for immediate recall of sentences for both low- and high-WM groups. On the other hand, delayed recall of sentences in multitalker babble was disproportionately more difficult for the low-WM group compared with the high-WM group. Conclusions: The sequential sentence paradigm is a feasible test format with mostly equivalent lists. Future studies using this paradigm may need to consider individual differences in WM to see the full range of effects across different conditions. Possible applications include testing the efficacy of various signal-processing techniques in clinical populations.

2015, Vol 26 (06), pp. 582–594
Author(s): Kathleen F. Faulkner, Terrin N. Tamati, Jaimie L. Gilbert, David B. Pisoni

Background: There is a pressing clinical need for the development of ecologically valid and robust assessment measures of speech recognition. Perceptually Robust English Sentence Test Open-set (PRESTO) is a new high-variability sentence recognition test that is sensitive to individual differences and was designed for use with several different clinical populations. PRESTO differs from other sentence recognition tests because the target sentences differ in talker, gender, and regional dialect. Increasing interest in using PRESTO as a clinical test of spoken word recognition dictates the need to establish equivalence across test lists. Purpose: The purpose of this study was to establish list equivalency of PRESTO for clinical use. Research Design: PRESTO sentence lists were presented to three groups of normal-hearing listeners in noise (multitalker babble [MTB] at 0 dB signal-to-noise ratio) or under eight-channel cochlear implant simulation (CI-Sim). Study Sample: Ninety-one young native speakers of English who were undergraduate students from the Indiana University community participated in this study. Data Collection and Analysis: Participants completed a sentence recognition task using different PRESTO sentence lists. They listened to sentences presented over headphones and typed in the words they heard on a computer. Keyword scoring was completed offline. Equivalency for sentence lists was determined based on the list intelligibility (mean keyword accuracy for each list compared with all other lists) and listener consistency (the relation between mean keyword accuracy on each list for each listener). Results: Based on measures of list equivalency and listener consistency, ten PRESTO lists were found to be equivalent in the MTB condition, nine lists were equivalent in the CI-Sim condition, and six PRESTO lists were equivalent in both conditions. 
Conclusions: PRESTO is a valuable addition to the clinical toolbox for assessing sentence recognition across different populations. Because the test condition influenced the overall intelligibility of lists, researchers and clinicians should take the presentation conditions into consideration when selecting the best PRESTO lists for their research or clinical protocols.
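The two equivalency criteria described in this abstract (list intelligibility and listener consistency) can be sketched numerically. In the minimal sketch below, the keyword-accuracy score matrix is simulated for illustration, and the 0.05 equivalence tolerance is an arbitrary assumption, not the statistical criterion used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical keyword-accuracy scores: rows = listeners, columns = PRESTO lists.
scores = rng.normal(loc=0.65, scale=0.05, size=(30, 10)).clip(0, 1)

# List intelligibility: mean keyword accuracy for each list.
list_means = scores.mean(axis=0)

# Listener consistency: for each list, the correlation between listeners'
# scores on that list and their mean score on all other lists.
consistency = []
for j in range(scores.shape[1]):
    others = np.delete(scores, j, axis=1).mean(axis=1)
    consistency.append(np.corrcoef(scores[:, j], others)[0, 1])

# Flag lists whose mean stays within an assumed 5-point tolerance of the grand mean.
grand = list_means.mean()
equivalent = [j for j, m in enumerate(list_means) if abs(m - grand) <= 0.05]
print(f"grand mean = {grand:.3f}, equivalent lists: {equivalent}")
```

A list that is "equivalent" under one presentation condition (MTB) need not be under another (CI-Sim), which is why the intersection of the two equivalent sets in the study was smaller than either set alone.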


Author(s): Adam K. Bosen, Victoria A. Sevich, Shauntelle A. Cannon

Purpose: In individuals with cochlear implants, speech recognition is not associated with tests of working memory that primarily reflect storage, such as forward digit span. In contrast, our previous work found that vocoded speech recognition in individuals with normal hearing was correlated with performance on a forward digit span task. A possible explanation for this difference across groups is that variability in auditory resolution across individuals with cochlear implants could conceal the true relationship between speech and memory tasks. Here, our goal was to determine if performance on forward digit span and speech recognition tasks are correlated in individuals with cochlear implants after controlling for individual differences in auditory resolution. Method: We measured sentence recognition ability in 20 individuals with cochlear implants with Perceptually Robust English Sentence Test Open-set sentences. Spectral and temporal modulation detection tasks were used to assess individual differences in auditory resolution, auditory forward digit span was used to assess working memory storage, and self-reported word familiarity was used to assess vocabulary. Results: Individual differences in speech recognition were predicted by spectral and temporal resolution. A correlation was found between forward digit span and speech recognition, but this correlation was not significant after controlling for spectral and temporal resolution. No relationship was found between word familiarity and speech recognition. Forward digit span performance was not associated with individual differences in auditory resolution. Conclusions: Our findings support the idea that sentence recognition in individuals with cochlear implants is primarily limited by individual differences in working memory processing, not storage. Studies examining the relationship between speech and memory should control for individual differences in auditory resolution.


2021, Vol 11 (6), pp. 703
Author(s): Eneko Antón, Jon Andoni Duñabeitia

In bilingual communities, social interactions take place in both single- and mixed-language contexts. Some of the information shared in multilingual conversations, such as interlocutors’ personal information, is often required in subsequent social encounters. In this study, we explored whether autobiographical information provided in a single-language context is better remembered than in an equivalent mixed-language situation. More than 400 Basque-Spanish bilingual (pre)teenagers were presented with new persons who introduced themselves using only Spanish, only Basque, or by mixing both languages inter-sententially. Different memory measures were collected immediately after the initial exposure to the new pieces of information (immediate recall and recognition) and on the following day (delayed recall and recognition). At no time point was the information provided in a mixed-language fashion remembered worse than that provided in a strict one-language context. Interestingly, variability across participants in sociodemographic and linguistic variables had a negligible impact on the effects. These results are discussed considering their social and educational implications for bilingual communities.


2011, Vol 17 (4), pp. 654–662
Author(s): Robert M. Chapman, Mark Mapstone, Margaret N. Gardner, Tiffany C. Sandoval, John W. McCrary, ...

Abstract: We analyzed verbal episodic memory learning and recall using the Logical Memory (LM) subtest of the Wechsler Memory Scale-III to determine how gender differences in AD compare to those seen in normal elderly adults and whether these differences impact assessment of AD. We administered the LM to both an AD and a Control group, each comprising 21 men and 21 women, and found a large drop in performance from normal elders to AD. Of interest was a gender interaction whereby the women's scores dropped 1.6 times more than the men's did. Control women on average outperformed Control men on every aspect of the test, including immediate recall, delayed recall, and learning. Conversely, AD women tended to perform worse than AD men. Additionally, the LM achieved perfect diagnostic accuracy in discriminant analysis of AD versus Control women, a statistically significantly higher result than for men. The results indicate that the LM is a more powerful and reliable tool for detecting AD in women than in men. (JINS, 2011, 17, 654–662)


2018
Author(s): Tim Schoof, Pamela Souza

Objective: Older hearing-impaired adults typically experience difficulties understanding speech in noise. Most hearing aids address this issue using digital noise reduction. While noise reduction does not necessarily improve speech recognition, it may reduce the resources required to process the speech signal. Those available resources may, in turn, aid the ability to perform another task while listening to speech (i.e., multitasking). This study examined to what extent changing the strength of digital noise reduction in hearing aids affects the ability to multitask. Design: Multitasking was measured using a dual-task paradigm combining a speech recognition task and a visual monitoring task. The speech recognition task involved sentence recognition in the presence of six-talker babble at signal-to-noise ratios (SNRs) of 2 and 7 dB. Participants were fit with commercially available hearing aids programmed with three noise reduction settings: off, mild, and strong. Study sample: 18 hearing-impaired older adults. Results: Noise reduction had no effect on the ability to multitask or on the ability to recognize speech in noise. Conclusions: Adjusting noise reduction settings in the clinic may not reliably improve performance on such tasks.


2019
Author(s): Andreas B. Neubauer, Veronika Lerche, Friederike Köhler, Andreas Voss

We compared two approaches to assessing inter-individual differences in the effect of satisfaction and frustration of basic needs (autonomy, competence, relatedness) on well-being: perceived need effects (beliefs about the effect of need fulfillment on one’s well-being) and experienced need effects (the within-person coupling of need fulfillment and well-being). In two studies (total N = 1,281), participants reported perceived need effects in a multidimensional way. In Study 2, daily need fulfillment and affective well-being were additionally assessed (a daily-diary study over ten days). Associations between perceived and experienced need effects were significant (albeit small) for all three frustration dimensions but for only one satisfaction dimension (relatedness), suggesting that the two approaches capture different constructs and might be related to different outcomes.


2021, Vol 13
Author(s): Larry E. Humes

Many older adults have difficulty understanding speech in noisy backgrounds. In this study, we examined peripheral auditory, higher-level auditory, and cognitive factors that may contribute to such difficulties. A convenience sample of 137 volunteer older adults (90 women, 47 men), ranging in age from 47 to 94 years (M = 69.2, SD = 10.1 years), completed a large battery of tests. Auditory tests included clinical and psychophysical measures of pure-tone threshold, two measures of gap-detection threshold, and four measures of temporal-order identification; the latter included two monaural and two dichotic listening conditions. In addition, cognition was assessed using the complete Wechsler Adult Intelligence Scale-3rd Edition (WAIS-III). Two monaural measures of speech-recognition threshold (SRT) in noise, the QuickSIN and the WIN, were obtained from each ear at relatively high presentation levels of 93 or 103 dB SPL to minimize audibility concerns. Group data, both aggregate and by age decade, were evaluated initially to allow comparison to data in the literature. Next, following the application of principal-components factor analysis for data reduction, individual differences in speech-recognition-in-noise performance were examined using multiple-linear-regression analyses. Excellent fits were obtained, accounting for 60–77% of the total variance; most of this was accounted for by the audibility of the speech and noise stimuli and the severity of hearing loss, with the balance primarily associated with cognitive function.
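The final analysis step described here, regressing speech-recognition-in-noise performance on a reduced set of factor scores, can be sketched as follows. The predictors and the SRT outcome below are simulated stand-ins (the study's actual factor scores and data are not reproduced), so only the mechanics of the least-squares fit and the variance-accounted-for computation carry over.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 137  # sample size from the study; the values themselves are simulated

# Hypothetical standardized predictors, stand-ins for factor scores such as
# audibility, hearing-loss severity, temporal processing, and cognition.
X = rng.normal(size=(n, 4))
# Simulated SRT-in-noise outcome, driven mostly by the first two predictors.
y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=1.0, size=n)

# Multiple linear regression via least squares, with an intercept column.
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

# R^2: proportion of variance in y accounted for by the predictors.
resid = y - A @ beta
r2 = 1.0 - resid.var() / y.var()
print(f"R^2 = {r2:.2f}")
```

With most of the outcome variance loaded onto two predictors, the fit recovers a high R^2, analogous to the 60–77% of variance accounted for in the study.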

