The Words-in-Noise (WIN) Test with Multitalker Babble and Speech-Spectrum Noise Maskers

2007 ◽  
Vol 18 (06) ◽  
pp. 522-529 ◽  
Author(s):  
Richard H. Wilson ◽  
Crystal S. Carnell ◽  
Amber L. Cleghorn

The Words-in-Noise (WIN) test uses monosyllabic words in seven signal-to-noise ratios of multitalker babble (MTB) to evaluate the ability of individuals to understand speech in background noise. The purpose of this study was to evaluate the criterion validity of the WIN by comparing recognition performances under MTB and speech-spectrum noise (SSN) using listeners with normal hearing and listeners with hearing loss. The MTB and SSN had identical rms and similar spectra but different amplitude-modulation characteristics. The performances by the listeners with normal hearing, which were 2 dB better in MTB than in SSN, were about 10 dB better than the performances by the listeners with hearing loss, which were about 0.5 dB better in MTB, with 56% of the listeners better in MTB and 40% better in SSN. The slopes of the functions for the normal-hearing listeners (8–9%/dB) were steeper than the functions for the listeners with hearing loss (5–6%/dB). The data indicate that the WIN has good criterion validity.
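The 50% points and slopes reported above come from psychometric functions fit to percent correct at each SNR. A minimal sketch of one common approach (a logit-linear fit of a two-parameter logistic), assuming the WIN's seven SNRs of 0–24 dB in 4-dB steps and noise-free illustrative proportions; this is not the authors' analysis:

```python
import math

def fit_logistic(snrs, p_correct):
    """Logit-linear least-squares fit of a two-parameter logistic
    psychometric function; returns (snr50, slope in %/dB at 50%)."""
    xs, ys = [], []
    for x, p in zip(snrs, p_correct):
        p = min(max(p, 0.01), 0.99)          # keep the logit finite
        xs.append(x)
        ys.append(math.log(p / (1.0 - p)))   # logit(p) = k * (x - snr50)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    k = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs)           # slope of logit vs. SNR
    a = my - k * mx
    snr50 = -a / k                           # logit crosses 0 at 50% correct
    slope_pct_per_db = 100.0 * k / 4.0       # logistic slope at midpoint is k/4
    return snr50, slope_pct_per_db

# Illustrative, noise-free data: snr50 = 6 dB S/N, slope = 5%/dB
snrs = [0, 4, 8, 12, 16, 20, 24]
p = [1.0 / (1.0 + math.exp(-0.2 * (x - 6.0))) for x in snrs]
snr50, slope = fit_logistic(snrs, p)
```

With real data the proportions are noisy and a maximum-likelihood fit is usually preferred, but the logit-linear version shows where a 50% point in dB S/N and a slope in %/dB come from.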

1994 ◽  
Vol 37 (3) ◽  
pp. 655-661 ◽  
Author(s):  
Pamela E. Souza ◽  
Christopher W. Turner

This study examined the contributions of various properties of background noise to the speech recognition difficulties experienced by young and elderly listeners with hearing loss. Three groups of subjects participated: young listeners with normal hearing, young listeners with sensorineural hearing loss, and elderly listeners with sensorineural hearing loss. Sensitivity thresholds up to 4000 Hz of the young and elderly groups of listeners with hearing loss were closely matched, and a high-pass masking noise was added to minimize the contributions of high-frequency (above 4000 Hz) thresholds, which were not closely matched. Speech recognition scores for monosyllables were obtained in the high-pass noise alone and in three noise backgrounds. The latter consisted of high-pass noise plus one of three maskers: speech-spectrum noise, speech-spectrum noise temporally modulated by the envelope of multi-talker babble, and multi-talker babble. For all conditions, the groups with hearing impairment consistently scored lower than the group with normal hearing. Although there was a trend toward poorer speech-recognition scores as the masker condition more closely resembled the speech babble, the effect of masker condition was not statistically significant. There was no interaction between group and condition, implying that listeners with normal hearing and listeners with hearing loss are affected similarly by the type of background noise when the long-term spectrum of the masker is held constant. A significant effect of age was not observed. In addition, masked thresholds for pure tones in the presence of the speech-spectrum masker were not different for the young and elderly listeners with hearing loss. These results suggest that, for both steady-state and modulated background noises, difficulties in speech recognition for monosyllables are due primarily, and perhaps exclusively, to the presence of sensorineural hearing loss itself, and not to age-specific factors.


2009 ◽  
Vol 20 (01) ◽  
pp. 028-039 ◽  
Author(s):  
Elizabeth M. Adams ◽  
Robert E. Moore

Purpose: To study the effect of noise on speech rate judgment and signal-to-noise ratio threshold (SNR50) at different speech rates (slow, preferred, and fast). Research Design: Speech rate judgment and SNR50 tasks were completed in a normal-hearing condition and a simulated hearing-loss condition. Study Sample: Twenty-four female and six male young, normal-hearing participants. Results: Speech rate judgment was not affected by background noise, regardless of hearing condition. Results of the SNR50 task indicated that, as speech rate increased, performance decreased for both hearing conditions. There was a moderate correlation between speech rate judgment and SNR50 across the various speech rates, such that as the judgment of speech rate moved from too slow to too fast, performance deteriorated. Conclusions: These findings support counseling patients and their families about the potential advantages of using average speech rates, or rates that are slightly slowed, while conversing in the presence of background noise.


2019 ◽  
Vol 62 (3) ◽  
pp. 758-767 ◽  
Author(s):  
Raymond L. Goldsworthy ◽  
Kali L. Markle

Purpose: Speech recognition deteriorates with hearing loss, particularly in fluctuating background noise. This study examined how hearing loss affects speech recognition in different types of noise to clarify how characteristics of the noise interact with the benefits listeners receive when listening in fluctuating compared to steady-state noise. Method: Speech reception thresholds were measured for a closed set of spondee words in children (ages 5–17 years) in quiet, speech-spectrum noise, 2-talker babble, and instrumental music. Twenty children with normal hearing and 43 children with hearing loss participated; the children with hearing loss were subdivided into cochlear implant (18 children) and hearing aid (25 children) groups. A cohort of adults with normal hearing was included for comparison. Results: Hearing loss had a large effect on speech recognition in each condition, but the effect of hearing loss was largest in 2-talker babble and smallest in speech-spectrum noise. Children with normal hearing had better speech recognition in 2-talker babble than in speech-spectrum noise, whereas children with hearing loss had worse recognition in 2-talker babble than in speech-spectrum noise. Almost all subjects had better speech recognition in instrumental music compared to speech-spectrum noise, but with less of a difference observed for children with hearing loss. Conclusions: Speech recognition is more sensitive to the effects of hearing loss when measured in fluctuating compared to steady-state noise. Speech recognition measured in fluctuating noise depends on an interaction of hearing loss with characteristics of the background noise; specifically, children with hearing loss were able to derive a substantial benefit from listening in fluctuating noise when measured in instrumental music compared to 2-talker babble.


2012 ◽  
Vol 23 (08) ◽  
pp. 590-605 ◽  
Author(s):  
Richard H. Wilson ◽  
Rachel McArdle ◽  
Kelly L. Watts ◽  
Sherri L. Smith

Background: The Revised Speech Perception in Noise Test (R-SPIN; Bilger, 1984b) is composed of 200 target words distributed as the last words in 200 low-predictability (LP) and 200 high-predictability (HP) sentences. Four list pairs, each consisting of two 50-sentence lists, were constructed, with each target word appearing in both an LP and an HP sentence. Traditionally, the R-SPIN is presented at a signal-to-noise ratio (SNR, S/N) of 8 dB, with the listener's task being to repeat the last word in the sentence. Purpose: The purpose was to determine the practicality of altering the R-SPIN format from a single-SNR paradigm into a multiple-SNR paradigm from which the 50% points for the HP and LP sentences can be calculated. Research Design: Three repeated-measures experiments were conducted. Study Sample: Forty listeners with normal hearing and 184 older listeners with pure-tone hearing loss participated in the sequence of experiments. Data Collection and Analysis: The R-SPIN sentences were edited digitally (1) to maintain the temporal relation between the sentences and babble, (2) to establish the SNRs, and (3) to mix the speech and noise signals to obtain SNRs between –1 and 23 dB. All materials were recorded on CD and were presented through an earphone, with the responses recorded and analyzed at the token level. For reference purposes, the Words-in-Noise Test (WIN) was included in the first experiment. Results: In Experiment 1, recognition performances by listeners with normal hearing were better than performances by listeners with hearing loss. For both groups, performances on the HP materials were better than performances on the LP materials. Performances on the LP materials and on the WIN were similar. Performances at 8 dB S/N were the same with the traditional fixed-level presentation and the descending presentation-level paradigms.
The results from Experiment 2 demonstrated that the four list pairs of R-SPIN materials produced good first-approximation psychometric functions over the –4 to 23 dB S/N range, but there were irregularities. The data from Experiment 2 were used in Experiment 3 to guide the selection of the words to be used at the various SNRs that would provide homogeneous performances at each SNR and would produce systematic psychometric functions. In Experiment 3, the 50% points were in good agreement for the LP and HP conditions within both groups of listeners. The psychometric functions for List Pairs 1 and 2, 3 and 4, and 5 and 6 had similar characteristics and maintained reasonable separations between the HP and LP functions, whereas the HP and LP functions for List Pair 7 and 8 bisected one another at the lower SNRs. Conclusions: This study indicates that the R-SPIN can be configured into a multiple-SNR paradigm. A more in-depth study with the R-SPIN materials is needed to develop lists that are systematic and reasonably equivalent for use with listeners with hearing loss. The approach should be based on the psychometric characteristics of the 200 HP and 200 LP sentences, with the current R-SPIN lists discarded. Of importance is maintaining the synchrony between the sentences and their accompanying babble.


2008 ◽  
Vol 19 (06) ◽  
pp. 496-506 ◽  
Author(s):  
Richard H. Wilson ◽  
Rachel McArdle ◽  
Heidi Roberts

Background: So that portions of the classic Miller, Heise, and Lichten (1951) study could be replicated, new recorded versions of the words and digits were made because none of the three common monosyllabic word lists (PAL PB-50, CID W-22, and NU–6) contained the 9 monosyllabic digits (1–10, excluding 7) that were used by Miller et al. It is well established that different psychometric characteristics have been observed for different lists and even for the same materials spoken by different speakers. The decision was made to record four lists of each of the three monosyllabic word sets, the monosyllabic digits not included in the three sets of word lists, and the CID W-1 spondaic words. A professional female speaker with a General American dialect recorded the materials during four recording sessions within a 2-week interval. The recording order of the 582 words was random. Purpose: To determine, in listeners with normal hearing, the psychometric properties of the five speech materials presented in speech-spectrum noise. Research Design: A quasi-experimental, repeated-measures design was used. Study Sample: Twenty-four young adult listeners (M = 23 years) with normal pure-tone thresholds (≤20 dB HL from 250 to 8000 Hz) participated. The participants were university students who were unfamiliar with the test materials. Data Collection and Analysis: The 582 words were presented at four signal-to-noise ratios (SNRs: −7, −2, 3, and 8 dB) in speech-spectrum noise fixed at 72 dB SPL. Although the main metric of interest was the 50% point on the function for each word established with the Spearman-Kärber equation (Finney, 1952), the percentage correct on each word at each SNR was evaluated. The psychometric characteristics of the PB-50, CID W-22, and NU–6 monosyllabic word lists were compared with one another, with the CID W-1 spondaic words, and with the 9 monosyllabic digits.
Results: Recognition performances on the four lists within each of the three monosyllabic word materials were equivalent, ±0.4 dB. Likewise, word-recognition performances on the PB-50, W-22, and NU–6 word lists were equivalent, ±0.2 dB. The mean recognition performance at the 50% point with the 36 W-1 spondaic words was ~6.2 dB lower than the 50% point with the monosyllabic words. Recognition performance on the monosyllabic digits was 1–2 dB better than mean performance on the monosyllabic words. Conclusions: Word-recognition performances on the three sets of materials (PB-50, CID W-22, and NU–6) were equivalent, as were the performances on the four lists that make up each of the three materials. Phonetic/phonemic balance does not appear to be an important consideration in the compilation of word-recognition lists used to evaluate the ability of listeners to understand speech. A companion paper examines the acoustic, phonetic/phonological, and lexical variables that may predict the relative ease or difficulty with which these monosyllabic words were recognized in noise (McArdle and Wilson, this issue).
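For equally spaced SNRs, the Spearman-Kärber estimate of a word's 50% point reduces to a one-line calculation. A minimal sketch under the usual assumptions (0% correct below the lowest SNR, 100% correct above the highest); the proportions correct here are hypothetical, not data from the study:

```python
def spearman_karber_50(snrs, p_correct):
    """50% point (dB S/N) via the Spearman-Karber equation for equally
    spaced SNRs, assuming 0% correct below the lowest SNR and 100%
    correct above the highest."""
    step = snrs[1] - snrs[0]                 # equal spacing assumed
    return max(snrs) + step / 2.0 - step * sum(p_correct)

# Hypothetical proportions correct at the study's four SNRs
snrs = [-7, -2, 3, 8]
p = [0.10, 0.40, 0.80, 1.00]
t50 = spearman_karber_50(snrs, p)            # -> -1.0 dB S/N
```

The estimate is simply the mean of the distribution whose cumulative function is the psychometric function, which is why no curve fitting is required.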


Author(s):  
Jawahar Antony P ◽  
Animesh Barman

Background and Aim: Auditory stream segregation is a phenomenon that splits sounds into different streams. The temporal cues that contribute to stream segregation have been previously studied in normal-hearing people. In people with sensorineural hearing loss (SNHL), the cues for temporal envelope coding are not usually affected, while the temporal fine structure cues are affected. These two temporal cues depend on the amplitude-modulation frequency. The present study aimed to evaluate the effect of sinusoidal amplitude-modulated (SAM) broadband noises on stream segregation in individuals with SNHL. Methods: Thirty normal-hearing subjects and 30 subjects with mild to moderate bilateral SNHL participated in the study. Two experiments were performed; in the first experiment, the AB sequence of broadband SAM stimuli was presented, while in the second experiment, only the B sequence was presented. A low (16 Hz) and a high (256 Hz) standard modulation frequency were used in these experiments. The subjects were asked to find the irregularities in the rhythmic sequence. Results: Both study groups could identify the irregularities similarly in both experiments. The minimum cumulative delay was slightly higher in the SNHL group. Conclusion: It is suggested that the temporal cues provided by the broadband SAM noises at low and high standard modulation frequencies were not used for stream segregation by either normal-hearing subjects or those with SNHL. Keywords: Stream segregation; sinusoidal amplitude modulation; sensorineural hearing loss
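SAM broadband noise of the kind described can be sketched by multiplying a Gaussian noise carrier by a raised sinusoidal envelope. This is an illustrative construction, not the study's stimulus code; the sample rate, duration, and full (depth = 1) modulation are assumptions:

```python
import math
import random

def sam_noise(duration_s, fm_hz, depth=1.0, fs=44100, seed=1):
    """Gaussian broadband noise carrier multiplied by a raised sinusoidal
    envelope [1 + depth * sin(2*pi*fm*t)] -> SAM broadband noise."""
    rng = random.Random(seed)
    n = int(duration_s * fs)
    return [rng.gauss(0.0, 1.0)
            * (1.0 + depth * math.sin(2.0 * math.pi * fm_hz * i / fs))
            for i in range(n)]

low = sam_noise(0.5, fm_hz=16)    # low standard modulation frequency
high = sam_noise(0.5, fm_hz=256)  # high standard modulation frequency
```

Because the carrier is broadband, only the envelope distinguishes the low and high modulation-frequency stimuli, which is what makes SAM noise useful for isolating temporal-envelope cues.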


2015 ◽  
Vol 24 (4) ◽  
pp. 477-486 ◽  
Author(s):  
Douglas P. Sladen ◽  
Todd A. Ricketts

Purpose: Several studies have been devoted to understanding the frequency information available to adult users of cochlear implants when listening in quiet. The objective of this study was to construct frequency importance functions for a group of adults with cochlear implants and a group of adults with normal hearing, both in quiet and at a +10 dB signal-to-noise ratio. Method: Two groups of adults, 1 with cochlear implants and 1 with normal hearing, were asked to identify nonsense syllables in quiet and in the presence of 6-talker babble while “holes” were systematically created in the speech spectrum. Frequency importance functions were constructed. Results: Results showed that adults with normal hearing placed greater weight on bands 1, 3, and 4 than on bands 2, 5, and 6, whereas adults with cochlear implants placed equal weight on all bands. The frequency importance functions for each group did not differ between listening in quiet and listening in noise. Conclusions: Adults with cochlear implants assign perceptual weight across frequency bands differently than adults with normal hearing, though the weight assignment does not differ between quiet and noisy conditions. Generalizing these results to the broader population of adults with implants is constrained by the small sample size.


1998 ◽  
Vol 41 (3) ◽  
pp. 549-563 ◽  
Author(s):  
Sid P. Bacon ◽  
Jane M. Opie ◽  
Danielle Y. Montoya

Speech recognition was measured in three groups of listeners: those with sensorineural hearing loss of (presumably) cochlear origin (HL), those with normal hearing (NH), and those with normal hearing who listened in the presence of a spectrally shaped noise that elevated their pure-tone thresholds to match those of individual listeners in the HL group (NM). Performance was measured in four backgrounds that differed only in their temporal envelope: steady-state (SS) speech-shaped noise, speech-shaped noise modulated by the envelope of multi-talker babble (MT), speech-shaped noise modulated by the envelope of single-talker speech (ST), and speech-shaped noise modulated by a 10-Hz square wave (SQ). Threshold signal-to-noise ratios (SNRs) were typically best in the ST and especially the SQ conditions, indicating a masking release in those modulated backgrounds. SNRs in the SS and MT conditions were essentially identical to one another. The masking release was largest in the listeners in the NH group, and it tended to decrease as hearing loss increased. In 5 of the 11 listeners in the HL group, the masking release was nearly identical to that obtained in the NM group matched to those listeners; in the other 6 listeners, the release was smaller than that in the NM group. The reduced masking release was simulated best in those HL listeners for whom the masking release was relatively large. These results suggest that reduced masking release for speech in listeners with sensorineural hearing loss can only sometimes be accounted for entirely by reduced audibility.
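Masking release in this design is simply the threshold SNR in the steady-state (SS) masker minus the threshold SNR in a modulated masker; positive values indicate a benefit from listening in the dips. A minimal sketch with hypothetical threshold values (the condition labels follow the abstract; the numbers are illustrative):

```python
def masking_release(thresholds_db):
    """Masking release re: the steady-state (SS) masker, in dB; positive
    values mean lower (better) threshold SNRs in the modulated masker."""
    ss = thresholds_db["SS"]
    return {cond: round(ss - snr, 1)
            for cond, snr in thresholds_db.items() if cond != "SS"}

# Hypothetical threshold SNRs (dB) for one listener
release = masking_release({"SS": -4.0, "MT": -4.1, "ST": -8.0, "SQ": -13.0})
# -> {'MT': 0.1, 'ST': 4.0, 'SQ': 9.0}
```

The illustrative numbers echo the reported pattern: essentially no release in MT, and progressively more in ST and SQ.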


2011 ◽  
Vol 22 (07) ◽  
pp. 393-404 ◽  
Author(s):  
Elizabeth D. Leigh-Paffenroth ◽  
Saravanan Elangovan

Background: Hearing loss and age interfere with the auditory system's ability to process temporal changes in the acoustic signal. A key unresolved question is whether high-frequency sensorineural hearing loss (HFSNHL) affects temporal processing in the low-frequency region where hearing loss is minimal or nonexistent. A second unresolved question is whether changes in hearing occur in middle-aged subjects in the absence of HFSNHL. Purpose: The purpose of this study was twofold: (1) to examine the influence of HFSNHL and aging on the auditory temporal processing abilities of low-frequency auditory channels with normal hearing sensitivity and (2) to examine the relations among gap detection measures, self-assessment reports of understanding speech, and functional measures of speech perception in middle-aged individuals with and without HFSNHL. Research Design: The subject groups were matched for either age (middle age) or pure-tone sensitivity (with or without hearing loss) to study the effects of age and HFSNHL on behavioral and functional measures of temporal processing and word recognition performance. These effects were analyzed by individual repeated-measures analyses of variance. Post hoc analyses were performed for each significant main effect and interaction. The relationships among the measures were analyzed with Pearson correlations. Study Sample: Eleven normal-hearing young adults (YNH), eight normal-hearing middle-aged adults (MANH), and nine middle-aged adults with HFSNHL were recruited for this study. Normal hearing sensitivity was defined as pure-tone thresholds ≤25 dB HL for octave frequencies from 250 to 8000 Hz. HFSNHL was defined as pure-tone thresholds ≤25 dB HL from 250 to 2000 Hz and ≥35 dB HL from 3000 to 8000 Hz. 
Data Collection and Analysis: Gap detection thresholds (GDTs) were measured under within-channel and between-channel conditions with the stimulus spectrum limited to regions of normal hearing sensitivity for the HFSNHL group (i.e., <2000 Hz). Self-perceived hearing problems were measured by a questionnaire (Abbreviated Profile of Hearing Aid Benefit), and word recognition performance was assessed under four conditions: quiet and babble, with and without low-pass filtering (cutoff frequency = 2000 Hz). Results: The effects of HFSNHL and age were found for gap detection, self-perceived hearing problems, and word recognition in noise. The presence of HFSNHL significantly increased GDTs for stimuli presented in regions of normal pure-tone sensitivity. In addition, middle-aged subjects with normal hearing sensitivity reported significantly more problems hearing in background noise than the young normal-hearing subjects. Significant relationships between self-report measures of hearing ability in background noise and word recognition in babble were found. Conclusions: The conclusions from the present study are twofold: (1) HFSNHL may have an off-channel impact on auditory temporal processing, and (2) presenescent changes in the auditory system of MANH subjects increased self-perceived problems hearing in background noise and decreased functional performance in background noise compared with YNH subjects.


2006 ◽  
Vol 17 (03) ◽  
pp. 157-167 ◽  
Author(s):  
Rachel A. McArdle ◽  
Richard H. Wilson

The purpose of this study was to determine the list equivalency of the 18 QuickSIN™ (Quick Speech in Noise test) lists. Individuals with normal hearing (n = 24) and with sensorineural hearing loss (n = 72) were studied. Mean recognition performances on the 18 lists by the listeners with normal hearing ranged from 2.8 to 4.3 dB SNR (signal-to-noise ratio), whereas the range was 10.0 to 14.3 dB SNR for the listeners with hearing loss. The psychometric functions for each list showed high performance variability across lists for listeners with hearing loss but not for listeners with normal hearing. For listeners with hearing loss, Lists 4, 5, 13, and 16 fell outside of the critical difference. The data from this study suggest nine lists that provide homogeneous results for listeners with and without hearing loss. Finally, there was an 8.7 dB difference in performance between the two groups, indicating that the listeners with hearing loss required a more favorable signal-to-noise ratio to obtain equal performance.
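For context, QuickSIN scoring (as published by Killion et al., 2004, not derived from this abstract) folds a Spearman-Kärber calculation into a single subtraction: each list has six sentences at 25 to 0 dB SNR in 5-dB steps with five key words each, so SNR-50 = 27.5 − total words correct, and subtracting the 2 dB SNR-50 typical of normal hearing gives the SNR loss:

```python
def quicksin_snr_loss(total_words_correct):
    """QuickSIN scoring per Killion et al. (2004): six sentences at 25 to
    0 dB SNR in 5-dB steps, five key words each, so Spearman-Karber gives
    SNR-50 = 27.5 - total correct; subtracting the 2 dB SNR-50 typical of
    normal hearing yields SNR loss = 25.5 - total correct."""
    return 25.5 - total_words_correct

loss = quicksin_snr_loss(23)   # 23 of 30 key words -> 2.5 dB SNR loss
```

This is why the test can be scored at the bedside without any curve fitting: the fixed list structure bakes the psychometric calculation into one constant.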

