Validation of a simple response-time measure of listening effort

2015 ◽  
Vol 138 (3) ◽  
pp. EL187-EL192 ◽  
Author(s):  
Carina Pals ◽  
Anastasios Sarampalis ◽  
Hedderik van Rijn ◽  
Deniz Başkent

1988 ◽  
Vol 6 (2) ◽  
pp. 161-172 ◽  
Author(s):  
Petr Janata ◽  
Daniel Reisberg

We explore the possibility of studying music perception with response-time measures. Subjects heard either a chord (tonic triad) or scale prime, followed by a single note, and indicated whether the note did or did not belong in the primed key. Overall, the data resemble the tonal hierarchy previously demonstrated with other methods, thus establishing the validity of the response-time measure. In addition, the scale primes superimpose a recency effect on the standard hierarchy, as would be expected from a serially presented stimulus. We discuss what this implies about tonal hierarchies, and the use of response-time measures to study the online processes of music listening. We also report data for nondiatonic tones.


2019 ◽  
Vol 62 (11) ◽  
pp. 4179-4195 ◽  
Author(s):  
Nicola Prodi ◽  
Chiara Visentin

Purpose: This study examines the effects of reverberation and noise fluctuation on the response time (RT) to auditory stimuli in a speech reception task. Method: The speech reception task was presented to 76 young adults with normal hearing in 3 simulated listening conditions (1 anechoic, 2 reverberant). Speechlike stationary and fluctuating noises were used as maskers, across a wide range of signal-to-noise ratios. The speech-in-noise tests were presented in a closed-set format; data on speech intelligibility and RT (the time elapsed from the offset of the auditory stimulus to the response selection) were collected. A slowing of RTs was interpreted as an increase in listening effort. Results: RTs slowed in the more challenging signal-to-noise ratios, with increasing reverberation, and for stationary compared to fluctuating noise, consistent with a fluctuating masking-release scheme. When speech intelligibility was fixed, the estimated RTs were similar or faster for stationary compared to fluctuating noise, depending on the amount of reverberation. Conclusions: The current findings add to the literature on listening effort for listeners with normal hearing by indicating that the addition of reverberation to fluctuating noise increases RT in a speech reception task. The results support the importance of integrating noise and reverberation to provide accurate predictors of real-world performance in clinical settings.
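The RT measure defined above (time elapsed from stimulus offset to response selection) and the signal-to-noise ratios used as masking conditions can be made concrete with a short sketch. The function names, timestamps, and RMS values below are illustrative assumptions, not the authors' actual analysis code:

```python
import math

def response_time_ms(stimulus_offset_ms: float, response_ms: float) -> float:
    """RT as defined above: time elapsed from the offset of the
    auditory stimulus to the response selection."""
    if response_ms < stimulus_offset_ms:
        raise ValueError("response precedes stimulus offset")
    return response_ms - stimulus_offset_ms

def snr_db(signal_rms: float, noise_rms: float) -> float:
    """Signal-to-noise ratio in dB from RMS amplitudes."""
    return 20.0 * math.log10(signal_rms / noise_rms)

# Example trial: the stimulus ends at 1200 ms and the listener
# selects a response at 1950 ms, under an equal-level masker.
rt = response_time_ms(1200.0, 1950.0)   # 750.0 ms
snr = snr_db(0.1, 0.1)                  # 0.0 dB (speech and noise at equal RMS)
```

Longer RTs under lower SNRs or stronger reverberation would then be read, as in the study, as a proxy for increased listening effort.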


2018 ◽  
Vol 25 (1) ◽  
pp. 35-42 ◽  
Author(s):  
Alice Lam ◽  
Murray Hodgson ◽  
Nicola Prodi ◽  
Chiara Visentin

This study evaluates the speech reception performance of native (L1) and non-native (L2) normal-hearing young adults in acoustical conditions containing varying amounts of reverberation and background noise. Two metrics were used and compared: the intelligibility score and the response time, the latter taken as a behavioral measure of listening effort. Listening tests were conducted in auralized acoustical environments with L1 and L2 English-speaking university students. Even though the two groups achieved the same near-maximum accuracy, L2 participants showed longer response times in every acoustical condition, suggesting an increased involvement of cognitive resources in the speech reception process.


2019 ◽  
Author(s):  
Kevin D Himberger ◽  
Amy Finn ◽  
Christopher John Honey

Statistical learning refers to the process of extracting regularities from the world without feedback. What are the necessary conditions for statistical learning to arise? It has been argued that visual statistical learning (VSL) is "automatic", such that subjects will passively and even unconsciously extract statistical regularities from streams of visual input as long as they attend to the stimuli. In contrast, our data indicate that simply attending to stimuli is not, on its own, sufficient for learning. In Experiments 1 & 2, we provided incidental exposure to regularities in a stream of images and observed little to no VSL across a range of conditions. In Experiment 3, we found that explicitly instructing participants to seek regularities dramatically improved their performance on direct measures of learning, but not on an indirect response-time measure. Finally, in Experiments 4 & 5, we demonstrated that a methodological confound in prior work using the indirect response-time measure could account for some previous evidence of automatic and implicit VSL. Overall, we found very little evidence of learning using direct measures of VSL, and no evidence of learning using an indirect response-time measure. Participants who recognized visual sequence regularities in a forced-choice task could also often recreate the sequences when explicitly probed, indicating their knowledge was not entirely implicit. We suggest that some form of active engagement with stimuli may be needed to extract sequential regularities, and that VSL does not occur automatically.


2019 ◽  
Vol 26 (4) ◽  
pp. 275-291
Author(s):  
Chiara Visentin ◽  
Nicola Prodi ◽  
Francesca Cappelletti ◽  
Simone Torresin ◽  
Andrea Gasparella

Listening effort describes the allocation of attentional and cognitive resources for successful listening. In adverse conditions, the mental demands of listening increase, interfering with other cognitive functions. This is especially relevant in learning spaces, where students perform complex tasks that recruit additional cognitive resources (e.g. memorization of information and comprehension). This study focuses on the case of university classrooms and investigates the effects of different types of masking noise on both speech intelligibility and listening effort. Speech-in-noise tests in the Italian language were presented to 25 young adults with normal hearing (13 native and 12 non-native listeners) within an existing university classroom located in Bozen-Bolzano (Italy). The tests were presented in three listening conditions (quiet, stationary noise, and fluctuating noise), grouping the listeners around two locations within the classroom. Task performance was assessed using speech intelligibility and two proxy measures of listening effort: response time and subjective ratings of effort. Longer response times and higher subjective ratings were taken to reflect increased listening effort. Results in noisy conditions were compared to the quiet condition. A disadvantage in task accuracy was found for non-native compared to native listeners; concerning response time, it was found that when the target signal was masked by fluctuating noise, additional processing time was required of non-native listeners compared to their native peers. This interaction was not reflected in the subjective ratings, supporting the hypothesis that the two proxy measures of listening effort have different sensitivities to the listening conditions.


2021 ◽  
Vol 25 ◽  
pp. 233121652110180
Author(s):  
Cynthia R. Hunter

A sequential dual-task design was used to assess the impacts of spoken sentence context and cognitive load on listening effort. Young adults with normal hearing listened to sentences masked by multitalker babble in which sentence-final words were either predictable or unpredictable. Each trial began with visual presentation of a short (low-load) or long (high-load) sequence of to-be-remembered digits. Words were identified more quickly and accurately in predictable than unpredictable sentence contexts. In addition, digits were recalled more quickly and accurately on trials on which the sentence was predictable, indicating reduced listening effort for predictable compared to unpredictable sentences. For word and digit recall response time but not for digit recall accuracy, the effect of predictability remained significant after exclusion of trials with incorrect word responses and was thus independent of speech intelligibility. In addition, under high cognitive load, words were identified more slowly and digits were recalled more slowly and less accurately than under low load. Participants’ working memory and vocabulary were not correlated with the sentence context benefit in either word recognition or digit recall. Results indicate that listening effort is reduced when sentences are predictable and that cognitive load affects the processing of spoken words in sentence contexts.
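A key analysis step described above is excluding trials with incorrect word responses before testing the predictability effect on RT, so that any remaining effect is independent of speech intelligibility. A minimal sketch of that exclusion step follows; the trial records, field names, and RT values are illustrative assumptions, not the study's data:

```python
from statistics import mean

# Hypothetical trial records from a sequential dual-task design:
# each trial notes the sentence context, whether the sentence-final
# word was identified correctly, and the digit-recall RT.
trials = [
    {"predictable": True,  "word_correct": True,  "digit_rt_ms": 820.0},
    {"predictable": True,  "word_correct": False, "digit_rt_ms": 990.0},
    {"predictable": False, "word_correct": True,  "digit_rt_ms": 905.0},
    {"predictable": False, "word_correct": True,  "digit_rt_ms": 940.0},
]

def mean_rt(trials: list[dict], predictable: bool) -> float:
    """Mean digit-recall RT for one context condition, excluding
    trials with incorrect word responses (the intelligibility control)."""
    rts = [t["digit_rt_ms"] for t in trials
           if t["word_correct"] and t["predictable"] == predictable]
    return mean(rts)

# Context benefit: how much faster digit recall is after predictable
# sentences, computed over correct-word trials only.
benefit = mean_rt(trials, False) - mean_rt(trials, True)  # 102.5 ms
```

A positive benefit on correct-word trials would correspond, in the study's interpretation, to reduced listening effort for predictable sentence contexts.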

