The Relationship between High-Frequency Pure-Tone Hearing Loss, Hearing in Noise Test (HINT) Thresholds, and the Articulation Index

2012, Vol 23 (10), pp. 779-788
Author(s):  
Andrew J. Vermiglio ◽  
Sigfrid D. Soli ◽  
Daniel J. Freed ◽  
Laurel M. Fisher

Background: Speech recognition in noise testing has been conducted at least since the 1940s (Dickson et al, 1946). The ability to recognize speech in noise is a distinct function of the auditory system (Plomp, 1978). According to Kochkin (2002), difficulty recognizing speech in noise is the primary complaint of hearing aid users. However, speech recognition in noise testing has not found widespread use in the field of audiology (Mueller, 2003; Strom, 2003; Tannenbaum and Rosenfeld, 1996). The audiogram has been used as the “gold standard” for hearing ability. However, the audiogram is a poor indicator of speech recognition in noise ability. Purpose: This study investigates the relationship between pure-tone thresholds, the articulation index, and the ability to recognize speech in quiet and in noise. Research Design: Pure-tone thresholds were measured for audiometric frequencies 250–6000 Hz. Pure-tone threshold groups were created: a normal threshold group and slight, mild, moderate, and severe high-frequency pure-tone threshold groups. Speech recognition thresholds in quiet and in noise were obtained using the Hearing in Noise Test (HINT) (Nilsson et al, 1994; Vermiglio, 2008). The articulation index was determined from the pure-tone thresholds using Pavlovic's method (Pavlovic, 1989, 1991). Study Sample: Two hundred seventy-eight participants were tested. All participants were native speakers of American English. Sixty-three of the original participants were removed in order to create groups with normal low-frequency pure-tone thresholds and relatively symmetrical high-frequency pure-tone thresholds. The final set of 215 participants had a mean age of 33 yr (range 17–59 yr). Data Collection and Analysis: Pure-tone threshold data were collected using the Hughson-Westlake procedure. Speech recognition data were collected using a Windows-based HINT software system. Statistical analyses included descriptive statistics, correlations, and a multivariate analysis of covariance (MANCOVA). Results: The MANCOVA (in which the effect of age was statistically removed) indicated no significant differences in HINT performance between the group with normal audiograms and the groups with slight, mild, moderate, or severe high-frequency hearing losses. With the data combined across groups, correlational analyses revealed significant correlations between pure-tone averages and speech recognition in quiet. Nonsignificant, or significant but weak, correlations were found between pure-tone averages and HINT thresholds. Conclusions: The ability to recognize speech in steady-state noise cannot be predicted from the audiogram. A new classification scheme of hearing impairment based on both the audiogram and speech reception in noise thresholds, as measured with the HINT, may be useful for characterizing hearing ability in a global sense. This classification scheme is consistent with Plomp's two aspects of hearing ability (Plomp, 1978).
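The HINT converges adaptively on the signal-to-noise ratio at which a listener repeats 50% of sentences correctly. The sketch below is a simplified, hypothetical one-up/one-down track with a deterministic toy listener; the actual HINT step sizes, scoring rules, and sentence materials are not reproduced here.

```python
def adaptive_srt(true_srt, start_snr=0.0, step_db=2.0, n_trials=20):
    """Simplified one-up/one-down adaptive track (illustrative only, not
    the published HINT rule). The toy listener repeats a sentence
    correctly whenever the presented SNR is at or above its true SRT."""
    snr = start_snr
    history = []
    for _ in range(n_trials):
        correct = snr >= true_srt                # deterministic toy listener
        history.append(snr)
        snr += -step_db if correct else step_db  # harder after a hit, easier after a miss
    # Estimate the SRT as the mean of the last several presentation levels,
    # once the track is oscillating around threshold.
    return sum(history[-10:]) / 10

estimate = adaptive_srt(true_srt=-2.5)  # converges near -2.5 dB SNR
```

With a deterministic listener the track settles into a two-level oscillation bracketing the true SRT, so the mean of the late trials lands within one step of it.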

2020, Vol 31 (03), pp. 224-232
Author(s):  
Andrew J. Vermiglio ◽  
Sigfrid D. Soli ◽  
Daniel J. Freed ◽  
Xiangming Fang

Abstract The literature presents conflicting reports on the relationship between pure-tone threshold average and speech recognition in noise ability. The purpose of this retrospective study and meta-analysis was to determine the effect of stimulus audibility on the relationship between speech recognition in noise ability and bilateral pure-tone average (BPTA). Pure-tone threshold and Hearing in Noise Test (HINT) data from two data sets were evaluated. The HINT data from both data sets were divided into groups with complete and partial audibility of the HINT stimuli delivered at 65 dBA. Normal-hearing and hearing-impaired participants were included in this retrospective study. For data set 1 (n = 215), a relatively weak relationship had been found between HINT thresholds and BPTA; for data set 2 (n = 55), a relatively strong relationship had been found. Only 10% of the participants in data set 1 had partial audibility of the HINT stimuli, compared with 16% of the participants in data set 2. Pure-tone thresholds and HINT data were obtained from published and unpublished studies. HINT data were collected in a simulated soundfield environment under headphones using the standard HINT protocol. Statistical analyses included descriptive statistics, correlations, a two-way analysis of variance (ANOVA), and multiple regression. A two-way ANOVA followed by post hoc analyses revealed a greater difference between the data sets for the Noise Front thresholds obtained with partial rather than complete audibility of the stimuli. A weak, nonsignificant relationship was found between BPTA (0.5, 1.0, 2.0, 3.0, 6.0 kHz) and HINT Noise Front thresholds for the complete audibility data (r = 0.060, p = 0.356), whereas a strong relationship was found for the partial audibility data (r = 0.863, p < 0.001). The proportion of partial audibility data in a given data set may influence the relative strength of the relationship between BPTA and HINT Noise Front thresholds. This calls into question the convention of using pure-tone average as a predictor of speech recognition in noise ability.
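The audibility split described above can be illustrated with a small Python sketch. All numbers below are fabricated, not the study's data, and the criterion — complete audibility when every pure-tone threshold falls below the 65 dBA presentation level — is a deliberate simplification of the actual analysis.

```python
import numpy as np

PRESENTATION_LEVEL = 65  # dBA, the HINT presentation level used in the study

# Rows: participants; columns: test frequencies. All values fabricated.
thresholds = np.array([
    [10, 15, 20, 25, 30],   # every threshold < 65 -> complete audibility
    [ 5, 10, 10, 20, 25],
    [15, 20, 25, 30, 40],
    [10, 10, 15, 35, 45],
    [20, 30, 45, 60, 70],   # 70 dB HL exceeds 65 -> partial audibility
    [15, 25, 40, 55, 80],
    [25, 35, 50, 70, 85],
])
hint_nf = np.array([-2.0, -2.2, -2.4, -1.0, 1.5, 0.8, 3.0])  # dB SNR, fabricated

bpta = thresholds.mean(axis=1)                        # pure-tone average per participant
complete = (thresholds < PRESENTATION_LEVEL).all(axis=1)

# Correlate BPTA with Noise Front thresholds within each audibility group.
r_complete = np.corrcoef(bpta[complete], hint_nf[complete])[0, 1]
r_partial = np.corrcoef(bpta[~complete], hint_nf[~complete])[0, 1]
```

With fabricated values chosen this way, the partial-audibility correlation is far stronger than the complete-audibility one, mirroring the pattern the study reports.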


2018, Vol 29 (10), pp. 948-954
Author(s):  
Paige Heeke ◽  
Andrew J. Vermiglio ◽  
Emery Bulla ◽  
Keerthana Velappan ◽  
Xiangming Fang

Abstract Temporal acoustic cues are particularly important for speech understanding, and past research has inferred a relationship between temporal resolution and speech recognition in noise ability. A temporal resolution disorder is thought to affect speech understanding because affected listeners cannot accurately encode rapid frequency transitions in speech, creating speech discrimination errors even in the presence of normal pure-tone hearing. The primary purpose was to investigate the relationship between temporal resolution, as measured by the Random Gap Detection Test (RGDT), and speech recognition in noise performance, as measured by the Hearing in Noise Test (HINT), in adults with normal audiometric thresholds. The second purpose was to examine the relationship between temporal resolution and spatial release from masking. The HINT and RGDT protocols were administered under headphones according to the guidelines specified by the developers. The HINT uses an adaptive protocol to determine the signal-to-noise ratio at which the participant recognizes 50% of the sentences. For the HINT conditions, the target sentences were presented at 0°, and the steady-state speech-shaped noise and a four-talker babble (4TB) were presented at 0°, +90°, or −90° for the noise front, noise right, and noise left conditions, respectively. The RGDT evaluates temporal resolution by determining the smallest time interval between two matching stimuli that the participant can detect; the RGDT threshold is the shortest interval at which the participant detects a gap. Tonal (0.5, 1, 2, and 4 kHz) and click-stimuli random gap subtests were presented at 60 dB HL. Tonal subtests were presented in random order to minimize presentation order effects. Twenty-one young, native English-speaking participants with normal pure-tone thresholds (≤25 dB HL for 500–4000 Hz) participated in this study. The average age of the participants was 20.2 years (SD = 0.66). Spearman rho correlation coefficients were computed using SPSS 22 (IBM Corp., Armonk, NY) to determine the relationships between HINT and RGDT thresholds and derived measures (spatial advantage and composite scores). Nonparametric testing was used because of the ordinal nature of RGDT data. Moderate negative correlations (p < 0.05) were found between eight RGDT and HINT threshold measures, and a moderate positive correlation (p < 0.05) was found between RGDT click thresholds and HINT 4TB spatial advantage. This suggests that as temporal resolution worsened, speech recognition in noise performance improved. These correlations were no longer statistically significant after a Bonferroni correction for multiple comparisons. The results of the present study imply that the RGDT and HINT tap different temporal processes; performance on the RGDT cannot be predicted from HINT thresholds or vice versa.
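To illustrate the nonparametric analysis described above, here is a sketch of a Spearman rank correlation with a Bonferroni-adjusted significance criterion. The data, the tie-free ranking method, and the count of nine comparisons are illustrative assumptions, not the study's values.

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson r of rank-transformed data.
    Ties are not handled, which is adequate for this illustration."""
    rx = np.argsort(np.argsort(x)).astype(float)  # double argsort -> ranks 0..n-1
    ry = np.argsort(np.argsort(y)).astype(float)
    return np.corrcoef(rx, ry)[0, 1]

# Fabricated gap-detection thresholds (ms) and HINT thresholds (dB SNR).
rgdt = np.array([4.0, 6.0, 5.0, 8.0, 10.0, 7.0])
hint = np.array([-2.0, -2.6, -2.4, -3.0, -3.5, -2.8])

rho = spearman_rho(rgdt, hint)  # negative: larger gaps pair with lower SRTs here

# Bonferroni correction: with k comparisons, each uncorrected p-value must
# fall below alpha / k to remain significant (here, arbitrarily, k = 9).
alpha_per_test = 0.05 / 9
```

These fabricated values are perfectly monotone, so rho is exactly -1; real RGDT data, being ordinal and tied, would need a tie-aware rank method.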


2018, Vol 29 (03), pp. 206-222
Author(s):  
Andrew J. Vermiglio ◽  
Sigfrid D. Soli ◽  
Xiangming Fang

Abstract The primary components of a diagnostic accuracy study are an index test, the target condition (or disorder), and a reference standard. According to the Standards for Reporting Diagnostic Accuracy statement, the reference standard should be the best available method for independently determining whether the results of an index test are correct. Pure-tone thresholds have been used as the “gold standard” for the validation of some tests used in audiology. Many studies, however, have shown a lack of agreement between the audiogram and the patient's perception of hearing ability. For example, patients with normal audiograms may report difficulty understanding speech in the presence of background noise. The primary purpose of this article is to present an argument for the use of self-report as a reference standard for diagnostic studies in the field of audiology, in the form of a literature review on pure-tone threshold measures and self-report as reference standards. The secondary purpose is to determine the diagnostic accuracy of pure-tone threshold and Hearing-in-Noise Test (HINT) measures for the detection of a speech-recognition-in-noise disorder. Two groups of participants with normal pure-tone thresholds were evaluated. The King–Kopetzky syndrome (KKS) group comprised participants who self-reported speech-recognition-in-noise difficulties; the control group comprised participants with no reported speech-recognition-in-noise problems. The reference standard was self-report. Diagnostic accuracy of HINT and pure-tone threshold measures was determined by measuring group differences, sensitivity and specificity, and the area under the curve (AUC) for receiver-operating characteristic (ROC) curves. Forty-seven participants were tested. All participants were native speakers of American English. Twenty-two participants were in the control group and 25 in the KKS group. The groups were matched for age. Pure-tone threshold data were collected using the Hughson–Westlake procedure. Speech-recognition-in-noise data were collected using a software system and the standard HINT protocol. Statistical analyses were conducted using descriptive and correlational statistics, two-sample t tests, and logistic regression. The literature review revealed that self-report has been used as a reference standard in investigations of patients with normal audiograms who perceive difficulty understanding speech in the presence of background noise. Self-report may be a better indicator of hearing ability than pure-tone thresholds in some situations. The diagnostic accuracy investigation revealed statistically significant differences between the control and KKS groups for HINT performance (p < 0.01), but not for pure-tone threshold measures. Better sensitivity was found for the HINT Composite score (88%) than for pure-tone average (PTA; 28%). The specificities for the HINT Composite score and PTA were 77% and 95%, respectively. ROC curves revealed a greater AUC for the HINT Composite score (AUC = 0.87) than for PTA (AUC = 0.51). Self-report is a reasonable reference standard for studies on the diagnostic accuracy of speech-recognition-in-noise tests. For individuals with normal pure-tone thresholds, the HINT demonstrated a higher degree of diagnostic accuracy than pure-tone thresholds for the detection of a speech-recognition-in-noise disorder.
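The sensitivity, specificity, and AUC measures above can be illustrated with a toy computation. Everything below — scores, decision criterion, and group sizes — is fabricated; only the mechanics (true-positive rate, true-negative rate, and the rank-sum identity for AUC) mirror a standard diagnostic-accuracy analysis.

```python
import numpy as np

# Fabricated index-test scores; higher scores are taken to indicate the
# target condition. Self-reported group membership is the reference standard.
kks_scores     = np.array([-1.0, -0.5, 0.2, 0.8, 1.5])    # condition present
control_scores = np.array([-3.0, -2.8, -2.2, -1.8, -0.6]) # condition absent

criterion = -1.5  # illustrative decision threshold
sensitivity = np.mean(kks_scores > criterion)       # true-positive rate
specificity = np.mean(control_scores <= criterion)  # true-negative rate

def auc(pos, neg):
    """Area under the ROC curve via the rank-sum identity:
    P(random positive score > random negative score), ties counted half."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

area = auc(kks_scores, control_scores)
```

Sweeping the criterion trades sensitivity against specificity along the ROC curve; the AUC summarizes discriminability independent of any one criterion, which is why the abstract reports both.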


2015, Vol 26 (06), pp. 540-546
Author(s):  
Eric Hoover ◽  
Lauren Pasquesi ◽  
Pamela Souza

Background: Temporal resolution is important for speech recognition and may contribute to variability in speech recognition among patients. Clinical tests of temporal resolution are available, but it is not clear how closely their results correspond to those of traditional temporal resolution tests. Purpose: The purpose of this study was to compare the Gaps-in-Noise (GIN) test to a traditional measure of gap detection. Study Sample: This study included older adults with hearing loss and younger adults with normal hearing. Data Collection and Analysis: Participants completed one practice and two test blocks of each gap detection test, and a measure of speech-in-noise recognition. Individual data were correlated to examine the relationship between the tests. Results: The GIN and traditional gap detection thresholds were significantly, but not strongly, correlated. The traditional gap detection test contributed to variance in speech recognition in noise, while the GIN did not. Conclusions: The brevity and ease of implementing the GIN in the clinic make it a viable test of temporal resolution. However, it differs from traditional measures in implementation and, as a result, relies on different cognitive factors. GIN thresholds should be interpreted carefully and not presumed to approximate traditional gap detection thresholds.


Author(s):  
Christina M. Roup ◽  
Amy Custer ◽  
Julie Powell

Purpose This study examined the relationship between self-perceived hearing abilities and binaural speech-in-noise performance in young to middle-age adults with normal pure-tone hearing. Method Sixty-six adults with normal hearing (thresholds ≤ 25 dB HL at 250–8000 Hz) participated. Self-perceived hearing abilities were assessed using the Adult Auditory Performance Scale (AAPS). The AAPS provides a single global score of self-perceived hearing abilities and individual subscale scores for six listening conditions, namely, Quiet, Ideal, Noise, Multiple Inputs, Auditory Memory, and Auditory Attention. Binaural speech-in-noise performance was measured with the Listening in Spatialized Noise–Sentences Test (LiSN-S). Results Results revealed significant correlations between the AAPS and the LiSN-S. Listeners who scored higher on the AAPS (greater self-perceived hearing difficulty) performed more poorly on the LiSN-S. The strongest correlations were observed between the AAPS Noise subscale score and the LiSN-S low- and high-cue conditions. Age was significantly correlated with both pure-tone hearing and the LiSN-S spatial advantage, with older participants exhibiting poorer thresholds and smaller spatial advantages. Pure-tone hearing was also significantly correlated with binaural speech-in-noise performance: listeners with poorer thresholds performed more poorly across multiple LiSN-S conditions. Linear regression revealed that a significant amount of the variance in LiSN-S performance was accounted for by pure-tone hearing as well as by the AAPS global score and Noise subscale score. Conclusions Results demonstrate a clear relationship between an individual's self-perceived hearing ability and their binaural speech-in-noise performance. In addition, even minimal threshold elevation within the normal range, and age (i.e., middle adulthood), had a negative impact on binaural speech-in-noise performance. The results support the inclusion of speech-in-noise testing for all patients, even those whose pure-tone hearing falls within the traditional normal range.


2021, Vol 32 (08), pp. 478-486
Author(s):  
Lisa G. Potts ◽  
Soo Jang ◽  
Cory L. Hillis

Abstract Background For cochlear implant (CI) recipients, speech recognition in noise is consistently poorer compared with recognition in quiet. Directional processing improves performance in noise and can be activated automatically based on acoustic scene analysis. The use of adaptive directionality with CI recipients is new and has not been investigated thoroughly, especially utilizing the recipients' preferred everyday signal processing, dynamic range, and/or noise reduction. Purpose This study utilized CI recipients' preferred everyday signal processing to evaluate four directional microphone options in a noisy environment to determine which option provides the best speech recognition in noise. A greater understanding of automatic directionality could ultimately improve CI recipients' speech-in-noise performance and better guide clinicians in programming. Study Sample Twenty-six unilateral and seven bilateral CI recipients with a mean age of 66 years and approximately 4 years of CI experience were included. Data Collection and Analysis Speech-in-noise performance was measured using eight loudspeakers in a 360-degree array with HINT sentences presented in restaurant noise. Four directional options were evaluated (automatic [SCAN], adaptive [Beam], fixed [Zoom], and omnidirectional) with participants' everyday signal processing options active. A mixed-model analysis of variance (ANOVA) and pairwise comparisons were performed. Results Automatic directionality (SCAN) resulted in the best speech-in-noise performance, although it was not significantly better than Beam. Omnidirectional performance was significantly poorer compared with the three other directional options. Participants varied in which of the four directional options yielded their best performance, with 16 performing best with automatic directionality. The majority of participants did not perform best with their everyday directional option.
Conclusion The individual variability seen in this study suggests that CI recipients should try different directional options to find their ideal program. However, when a CI recipient is not motivated to try different programs, automatic directionality is an appropriate everyday processing option.


Author(s):  
Bruna S. Mussoi

Purpose Music training has been proposed as a possible tool for auditory training in older adults, as it may improve both auditory and cognitive skills. However, the evidence to support such benefits is mixed. The goal of this study was to determine the differential effects of lifelong musical training and working memory on speech recognition in noise, in older adults. Method A total of 31 musicians and nonmusicians aged 65–78 years took part in this cross-sectional study. Participants had a normal pure-tone average, with most having high-frequency hearing loss. Working memory (memory capacity) was assessed with the backward Digit Span test, and speech recognition in noise was assessed with three clinical tests (Quick Speech in Noise, Hearing in Noise Test, and Revised Speech Perception in Noise). Results Findings from this sample of older adults indicate that neither music training nor working memory was associated with differences on the speech recognition in noise measures used in this study. Similarly, duration of music training was not associated with speech-in-noise recognition. Conclusions Results from this study do not support the hypothesis that lifelong music training benefits speech recognition in noise. Similarly, an effect of working memory (memory capacity) was not apparent. While these findings may be related to the relatively small sample size, results across previous studies that investigated these effects have also been mixed. Prospective randomized music training studies may be able to better control for variability in outcomes associated with pre-existing and music training factors, as well as to examine the differential impact of music training and working memory for speech-in-noise recognition in older adults.

