The Effect of Stimulus Audibility on the Relationship between Pure-Tone Average and Speech Recognition in Noise Ability

2020 ◽  
Vol 31 (03) ◽  
pp. 224-232
Author(s):  
Andrew J. Vermiglio ◽  
Sigfrid D. Soli ◽  
Daniel J. Freed ◽  
Xiangming Fang

Abstract The literature presents conflicting reports on the relationship between pure-tone threshold average and speech recognition in noise ability. The purpose of this retrospective study and meta-analysis was to determine the effect of stimulus audibility on the relationship between speech recognition in noise ability and bilateral pure-tone average (BPTA). Pure-tone threshold and Hearing in Noise Test (HINT) data from two data sets were evaluated. The HINT data from both data sets were divided into groups with complete and partial audibility of the HINT stimuli delivered at 65 dBA. Both normal-hearing and hearing-impaired participants were included in this retrospective study. For data set 1 (n = 215), a relatively weak relationship had been found between HINT thresholds and BPTA, and only 10% of the participants had partial audibility of the HINT stimuli. For data set 2 (n = 55), a relatively strong relationship had been found between HINT thresholds and BPTA, and 16% of the participants had partial audibility of the HINT stimuli. Pure-tone thresholds and HINT data were obtained from published and unpublished studies. HINT data were collected in a simulated soundfield environment under headphones using the standard HINT protocol. Statistical analyses included descriptive statistics, correlations, a two-way analysis of variance (ANOVA), and multiple regression. The two-way ANOVA followed by post hoc analyses revealed a greater difference between the data sets for the Noise Front thresholds obtained with partial rather than complete audibility of the stimuli.
A weak, nonsignificant relationship was found between BPTA (0.5, 1.0, 2.0, 3.0, 6.0 kHz) and HINT Noise Front thresholds for the complete-audibility data (r = 0.060, p = 0.356), whereas a strong relationship was found for the partial-audibility data (r = 0.863, p < 0.001). The proportion of partial-audibility data in a given data set may therefore influence the relative strength of the relationship between BPTA and HINT Noise Front thresholds. This calls into question the convention of using pure-tone average as a predictor of speech recognition in noise ability.
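The split-by-audibility-then-correlate analysis described above can be illustrated in code. The following is a minimal sketch (not the authors' analysis): the BPTA and HINT Noise Front arrays are hypothetical, generated so that one subgroup is uncorrelated and the other strongly related, and only the workflow mirrors the abstract.

```python
# Illustrative sketch: Pearson correlations computed separately for a
# complete-audibility and a partial-audibility subgroup. All data below
# are hypothetical placeholders, not the study's measurements.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical complete-audibility group: HINT thresholds unrelated to BPTA.
bpta_complete = rng.uniform(0, 25, 50)          # dB HL
hint_complete = rng.normal(-2.5, 1.0, 50)       # dB SNR

# Hypothetical partial-audibility group: HINT thresholds track BPTA closely.
bpta_partial = rng.uniform(30, 70, 20)
hint_partial = 0.1 * bpta_partial + rng.normal(0.0, 0.5, 20)

r_complete = np.corrcoef(bpta_complete, hint_complete)[0, 1]
r_partial = np.corrcoef(bpta_partial, hint_partial)[0, 1]
print(f"complete audibility: r = {r_complete:.3f}")
print(f"partial audibility:  r = {r_partial:.3f}")
```

Mixing the two subgroups into a single correlation would blend these very different relationships, which is the pooling effect the abstract warns about.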

2012 ◽  
Vol 23 (10) ◽  
pp. 779-788 ◽  
Author(s):  
Andrew J. Vermiglio ◽  
Sigfrid D. Soli ◽  
Daniel J. Freed ◽  
Laurel M. Fisher

Background: Speech recognition in noise testing has been conducted at least since the 1940s (Dickson et al, 1946). The ability to recognize speech in noise is a distinct function of the auditory system (Plomp, 1978). According to Kochkin (2002), difficulty recognizing speech in noise is the primary complaint of hearing aid users. However, speech recognition in noise testing has not found widespread use in the field of audiology (Mueller, 2003; Strom, 2003; Tannenbaum and Rosenfeld, 1996). The audiogram has been used as the “gold standard” for hearing ability. However, the audiogram is a poor indicator of speech recognition in noise ability. Purpose: This study investigates the relationship between pure-tone thresholds, the articulation index, and the ability to recognize speech in quiet and in noise. Research Design: Pure-tone thresholds were measured for audiometric frequencies 250–6000 Hz. Pure-tone threshold groups were created. These included a normal threshold group and slight, mild, moderate, and severe high-frequency pure-tone threshold groups. Speech recognition thresholds in quiet and in noise were obtained using the Hearing in Noise Test (HINT) (Nilsson et al, 1994; Vermiglio, 2008). The articulation index was determined by using Pavlovic's method with pure-tone thresholds (Pavlovic, 1989, 1991). Study Sample: Two hundred seventy-eight participants were tested. All participants were native speakers of American English. Sixty-three of the original participants were removed in order to create groups of participants with normal low-frequency pure-tone thresholds and relatively symmetrical high-frequency pure-tone threshold groups. The final set of 215 participants had a mean age of 33 yr with a range of 17–59 yr. Data Collection and Analysis: Pure-tone threshold data were collected using the Hughson-Westlake procedure. Speech recognition data were collected using a Windows-based HINT software system.
Statistical analyses were conducted using descriptive, correlational, and multivariate analysis of covariance (MANCOVA) statistics. Results: The MANCOVA analysis (where the effect of age was statistically removed) indicated that there were no significant differences in HINT performances between groups of participants with normal audiograms and those groups with slight, mild, moderate, or severe high-frequency hearing losses. With all of the data combined across groups, correlational analyses revealed significant correlations between pure-tone averages and speech recognition in quiet performance. Nonsignificant or significant but weak correlations were found between pure-tone averages and HINT thresholds. Conclusions: The ability to recognize speech in steady-state noise cannot be predicted from the audiogram. A new classification scheme of hearing impairment based on the audiogram and the speech reception in noise thresholds, as measured with the HINT, may be useful for the characterization of the hearing ability in the global sense. This classification scheme is consistent with Plomp's two aspects of hearing ability (Plomp, 1978).


2008 ◽  
Vol 19 (07) ◽  
pp. 548-556 ◽  
Author(s):  
Richard H. Wilson ◽  
Wendy B. Cates

Background: The Speech Recognition in Noise Test (SPRINT) is a word-recognition instrument that presents the 200 Northwestern University Auditory Test No. 6 (NU-6) words binaurally at 50 dB HL in a multitalker babble at a 9 dB signal-to-noise ratio (S/N) (Cord et al, 1992). The SPRINT was developed and used by the Army as a more valid predictor of communication abilities (than pure-tone thresholds or word recognition in quiet) for issues involving fitness for duty from a hearing perspective of Army personnel. The Words-in-Noise test (WIN) is a slightly different word-recognition task in a fixed-level multitalker babble with 10 NU-6 words presented at each of 7 S/N from 24 to 0 dB S/N in 4 dB decrements (Wilson, 2003; Wilson and McArdle, 2007). For the two instruments, both the babble and the speakers of the words are different. The SPRINT uses all 200 NU-6 words, whereas the WIN uses a maximum of 70 words. Purpose: The purpose was to compare recognition performances by 24 young listeners with normal hearing and 48 older listeners with sensorineural hearing loss on the SPRINT and WIN protocols. Research Design: A quasi-experimental, mixed model design was used. Study Sample: The 24 young listeners with normal hearing (19 to 29 years, mean = 23.3 years) were from the local university and had normal hearing (≤20 dB HL; American National Standards Institute, 2004) at the 250–8000 Hz octave intervals. The 48 older listeners with sensorineural hearing loss (60 to 82 years, mean = 69.9 years) had the following inclusion criteria: (1) a threshold at 500 Hz between 15 and 30 dB HL, (2) a threshold at 1000 Hz between 20 and 40 dB HL, (3) a three-frequency pure-tone average (500, 1000, and 2000 Hz) of ≤40 dB HL, (4) word-recognition scores in quiet ≥40%, and (5) no history of middle ear or retrocochlear pathology as determined by an audiologic evaluation.
Data Collection and Analysis: The speech materials were presented bilaterally in the following order: (1) the SPRINT at 50 dB HL, (2) two half lists of NU-6 words in quiet at 60 dB HL and 80 dB HL, and (3) the two 35-word lists of the WIN materials with the multitalker babble fixed at 60 dB HL. Data collection occurred during a 40–60 minute session. Recognition performances on each stimulus word were analyzed. Results: The listeners with normal hearing obtained 92.5% correct on the SPRINT with a 50% point on the WIN of 2.7 dB S/N. The listeners with hearing loss obtained 65.3% correct on the SPRINT and a WIN 50% point at 12.0 dB S/N. The SPRINT and WIN were significantly correlated (r = −0.81, p < .01), indicating that the SPRINT had good concurrent validity. The high-frequency, pure-tone average (1000, 2000, 4000 Hz) had higher correlations with the SPRINT, WIN, and NU-6 in quiet than did the traditional three-frequency pure-tone average (500, 1000, 2000 Hz). Conclusions: Graphically and numerically the SPRINT and WIN were highly related, which is indicative of good concurrent validity of the SPRINT.


2018 ◽  
Vol 29 (03) ◽  
pp. 206-222 ◽  
Author(s):  
Andrew J. Vermiglio ◽  
Sigfrid D. Soli ◽  
Xiangming Fang

Abstract The primary components of a diagnostic accuracy study are an index test, the target condition (or disorder), and a reference standard. According to the Standards for Reporting Diagnostic Accuracy statement, the reference standard should be the best method available to independently determine if the results of an index test are correct. Pure-tone thresholds have been used as the “gold standard” for the validation of some tests used in audiology. Many studies, however, have shown a lack of agreement between the audiogram and the patient’s perception of hearing ability. For example, patients with normal audiograms may report difficulty understanding speech in the presence of background noise. The primary purpose of this article is to present an argument for the use of self-report as a reference standard for diagnostic studies in the field of audiology. This will be in the form of a literature review on pure-tone threshold measures and self-report as reference standards. The secondary purpose is to determine the diagnostic accuracy of pure-tone threshold and Hearing-in-Noise Test (HINT) measures for the detection of a speech-recognition-in-noise disorder. Two groups of participants with normal pure-tone thresholds were evaluated. The King–Kopetzky syndrome (KKS) group was made up of participants with the self-report of speech-recognition-in-noise difficulties. The control group was made up of participants with no reports of speech-recognition-in-noise problems. The reference standard was self-report. Diagnostic accuracy of HINT and pure-tone threshold measures was determined by measuring group differences, sensitivity and specificity, and the area under the curve (AUC) for receiver-operating characteristic (ROC) curves. Forty-seven participants were tested. All participants were native speakers of American English. Twenty-two participants were in the control group and 25 in the KKS group.
The groups were matched for age. Pure-tone threshold data were collected using the Hughson–Westlake procedure. Speech-recognition-in-noise data were collected using a software system and the standard HINT protocol. Statistical analyses were conducted using descriptive statistics, correlational analyses, two-sample t tests, and logistic regression. The literature review revealed that self-report has been used as a reference standard in investigations of patients with normal audiograms and the perception of difficulty understanding speech in the presence of background noise. Self-report may be a better indicator of hearing ability than pure-tone thresholds in some situations. The diagnostic accuracy investigation revealed statistically significant differences between control and KKS groups for HINT performance (p < 0.01), but not for pure-tone threshold measures. Better sensitivity was found for the HINT Composite score (88%) than for pure-tone average (PTA; 28%). The specificities for the HINT Composite score and PTA were 77% and 95%, respectively. ROC curves revealed a greater AUC for the HINT Composite score (AUC = 0.87) than for PTA (AUC = 0.51). Self-report is a reasonable reference standard for studies on the diagnostic accuracy of speech-recognition-in-noise tests. For individuals with normal pure-tone thresholds, the HINT demonstrated a higher degree of diagnostic accuracy than pure-tone thresholds for the detection of a speech-recognition-in-noise disorder.


2021 ◽  
Vol 99 (Supplement_1) ◽  
pp. 218-219
Author(s):  
Andres Fernando T Russi ◽  
Mike D Tokach ◽  
Jason C Woodworth ◽  
Joel M DeRouchey ◽  
Robert D Goodband ◽  
...  

Abstract The swine industry has been constantly evolving to select animals with improved performance traits and to minimize variation in body weight (BW) in order to meet packer specifications. Therefore, understanding variation presents an opportunity for producers to find strategies that could help reduce, manage, or deal with variation of pigs in a barn. A systematic review and meta-analysis was conducted by collecting data from multiple studies and available data sets in order to develop prediction equations for coefficient of variation (CV) and standard deviation (SD) as a function of BW. Information regarding BW variation from 16 papers was recorded to provide approximately 204 data points. Together, these data included 117,268 individually weighed pigs with a sample size that ranged from 104 to 4,108 pigs. A random-effects model with study used as a random effect was developed. Observations were weighted using sample size as an estimate for precision on the analysis, where larger data sets accounted for increased accuracy in the model. Regression equations were developed using the nlme package of R to determine the relationship between BW and its variation. Polynomial regression analysis was conducted separately for each variation measurement. When CV was reported in the data set, SD was calculated and vice versa. The resulting prediction equations were: CV (%) = 20.04 − 0.135 × BW + 0.00043 × BW², R² = 0.79; SD = 0.41 + 0.150 × BW − 0.00041 × BW², R² = 0.95. These equations suggest that there is evidence for a decreasing quadratic relationship between mean CV of a population and BW of pigs whereby the rate of decrease is smaller as mean pig BW increases from birth to market. Conversely, the rate of increase of SD of a population of pigs is smaller as mean pig BW increases from birth to market.
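The reported prediction equations can be written directly as functions. In the minimal sketch below, the coefficients come from the abstract, while the function names and the example body weights are illustrative (BW is in the units of the source analysis):

```python
# Sketch of the prediction equations reported in the abstract.
# Coefficients are taken verbatim from the text; names are illustrative.

def predict_cv(bw: float) -> float:
    """Predicted coefficient of variation (%) as a function of mean BW."""
    return 20.04 - 0.135 * bw + 0.00043 * bw ** 2

def predict_sd(bw: float) -> float:
    """Predicted standard deviation of BW as a function of mean BW."""
    return 0.41 + 0.150 * bw - 0.00041 * bw ** 2

# Example: CV declines while SD grows as mean BW increases toward market weight.
for bw in (10, 60, 120):
    print(bw, round(predict_cv(bw), 2), round(predict_sd(bw), 2))
```

Evaluating both equations over a range of weights makes the abstract's conclusion concrete: the quadratic terms flatten the CV decrease and the SD increase as pigs approach market weight.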


2007 ◽  
Vol 18 (07) ◽  
pp. 604-617 ◽  
Author(s):  
Thomas Lunner ◽  
Elisabet Sundewall-Thorén

This study, which included 23 experienced hearing aid users, replicated several of the experiments reported in Gatehouse et al (2003, 2006) with new speech test material, language, and test procedure. The performance measure was the signal-to-noise ratio (SNR) required for 80% correct words in a sentence test. Consistent with Gatehouse et al, this study indicated that subjects with a low score on a cognitive test (visual letter monitoring) performed better on the speech recognition test with slow time constants than with fast time constants, and performed better in unmodulated noise than in modulated noise, while subjects with high scores on the cognitive test showed the opposite pattern. Furthermore, cognitive test scores were significantly correlated with the differential advantage of fast-acting versus slow-acting compression in conditions of modulated noise. The pure-tone average threshold explained 30% of the variance in aided speech recognition in noise under relatively simple listening conditions, while cognitive test scores explained about 40% of the variance under more complex, fluctuating listening conditions, where the pure-tone average explained less than 5% of the variance. This suggests that speech recognition under steady-state noise conditions may underestimate the role of cognition in real-life listening.


2017 ◽  
Vol 3 (5) ◽  
pp. e192 ◽  
Author(s):  
Corina Anastasaki ◽  
Stephanie M. Morris ◽  
Feng Gao ◽  
David H. Gutmann

Objective: To ascertain the relationship between the germline NF1 gene mutation and glioma development in patients with neurofibromatosis type 1 (NF1). Methods: The relationship between the type and location of the germline NF1 mutation and the presence of a glioma was analyzed in 37 participants with NF1 from one institution (Washington University School of Medicine [WUSM]) with a clinical diagnosis of NF1. Odds ratios (ORs) were calculated using both unadjusted and weighted analyses of this data set in combination with 4 previously published data sets. Results: While no statistical significance was observed between the location and type of the NF1 mutation and glioma in the WUSM cohort, power calculations revealed that a sample size of 307 participants would be required to determine the predictive value of the position or type of the NF1 gene mutation. Combining our data set with 4 previously published data sets (n = 310), children with glioma were found to be more likely to harbor 5′-end gene mutations (OR = 2; p = 0.006). Moreover, while not clinically predictive due to insufficient sensitivity and specificity, this association with glioma was stronger for participants with 5′-end truncating (OR = 2.32; p = 0.005) or 5′-end nonsense (OR = 3.93; p = 0.005) mutations relative to those without glioma. Conclusions: Individuals with NF1 and glioma are more likely to harbor nonsense mutations in the 5′ end of the NF1 gene, suggesting that the NF1 mutation may be one predictive factor for glioma in this at-risk population.
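As a sketch of how an unadjusted odds ratio of this kind is computed, the following uses a hypothetical 2×2 table; the counts are placeholders, not the study data, and the Woolf log-scale confidence interval is a standard textbook choice, not necessarily the method used in the paper.

```python
# Unadjusted odds ratio from a 2x2 table, with a Woolf 95% CI.
# Counts below are hypothetical placeholders for illustration only.
import math

def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """OR for a 2x2 table: a,b = with outcome (exposed/unexposed);
    c,d = without outcome (exposed/unexposed)."""
    return (a * d) / (b * c)

# Hypothetical counts: rows = glioma / no glioma,
# columns = 5'-end mutation / mutation elsewhere.
a, b = 40, 30
c, d = 60, 90
or_ = odds_ratio(a, b, c, d)

# Woolf method: standard error of log(OR), then exponentiate the bounds.
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(or_) - 1.96 * se)
hi = math.exp(math.log(or_) + 1.96 * se)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

An OR above 1 with a confidence interval excluding 1 indicates that the exposure (here, a 5′-end mutation) is associated with higher odds of the outcome.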


2012 ◽  
Vol 263-266 ◽  
pp. 2173-2178
Author(s):  
Xin Guang Li ◽  
Min Feng Yao ◽  
Li Rui Jian ◽  
Zhen Jiang Li

A probabilistic neural network (PNN) speech recognition model based on a partition clustering algorithm is proposed in this paper. The most important advantage of the PNN is that training is easy and instantaneous, so the PNN is capable of real-time speech recognition. In addition, the selection of the training data set is one of the most important factors in PNN performance, and this paper proposes using a partition clustering algorithm to select the data. The proposed model is tested on two data sets of spoken Arabic numbers, with promising results. Its performance is compared to a single back-propagation neural network and an integrated back-propagation neural network. The final comparison shows that the proposed model performs better than the other two networks, with an accuracy rate of 92.41%.
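The claim that PNN training is essentially instantaneous follows from its design: training amounts to storing the patterns, and classification evaluates a kernel density per class. Here is a minimal sketch of Gaussian-kernel PNN classification; the toy feature vectors and the smoothing parameter sigma are illustrative, not the paper's Arabic-digit setup.

```python
# Minimal Gaussian-kernel PNN classifier sketch (illustrative, not the
# paper's model). "Training" is just storing train_x/train_y.
import numpy as np

def pnn_classify(train_x, train_y, test_x, sigma=0.5):
    """Assign each test vector to the class with the highest mean kernel density."""
    classes = np.unique(train_y)
    preds = []
    for x in test_x:
        # Pattern layer: Gaussian kernel activation for every stored pattern.
        d2 = np.sum((train_x - x) ** 2, axis=1)
        k = np.exp(-d2 / (2 * sigma ** 2))
        # Summation layer: average activation per class; output layer: argmax.
        scores = [k[train_y == c].mean() for c in classes]
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)

# Toy usage with two well-separated clusters of 2-D feature vectors.
train_x = np.array([[0.0, 0.0], [0.1, 0.2], [2.0, 2.0], [2.1, 1.9]])
train_y = np.array([0, 0, 1, 1])
preds = pnn_classify(train_x, train_y, np.array([[0.05, 0.1], [2.0, 2.1]]))
print(preds)
```

Because every stored pattern contributes a kernel at classification time, pruning the training set (e.g., by a partition clustering step, as the paper proposes) directly reduces per-utterance cost, which matters for real-time recognition.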


2005 ◽  
Vol 2005 (1) ◽  
pp. 143-147
Author(s):  
Daniel R. Norton

ABSTRACT The annual volume of oil spilled into the marine environment by tank vessels (tank barges and tankships) is analyzed against the total annual volume of oil transported by tank vessels in order to determine any correlational relationship. U.S. Coast Guard data were used to provide the volume of oil (petroleum) spilled into the marine environment each year by tank vessels. Data from the U.S. Army Corps of Engineers and the U.S. Department of Transportation's (US DOT) National Transportation Statistics (NTS) were used for the annual volume of oil transported via tank vessels in the United States. These data are provided in the form of tonnage and ton-miles, respectively; each data set has inherent benefits and weaknesses. For the analysis, the volume of oil transported was used as the explanatory variable (x) and the volume of oil spilled into the marine environment as the response variable (y). Both data sets were tested for correlation. A weak relationship (r = −0.38) was found using tonnage, and no further analysis was performed. A moderately strong relationship (r = 0.79) was found using ton-miles. Further analysis using regression and a plot of residuals showed the data to be satisfactory with no sign of lurking variables, but with the year 1990 being a possible outlier.
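The correlation-then-residuals workflow described above can be sketched as follows. The numbers are hypothetical (ton-miles, spill-volume) pairs, not the USCG or NTS figures; only the sequence of steps mirrors the analysis.

```python
# Sketch of the analysis workflow: Pearson correlation, least-squares fit,
# then residuals for an outlier/lurking-variable check. Data are hypothetical.
import numpy as np

ton_miles = np.array([520.0, 560.0, 610.0, 650.0, 700.0, 760.0])  # explanatory x
spilled = np.array([1.1, 1.3, 1.2, 1.6, 1.7, 2.0])                # response y

# Step 1: test for correlation.
r = np.corrcoef(ton_miles, spilled)[0, 1]

# Step 2: least-squares regression line and residuals.
slope, intercept = np.polyfit(ton_miles, spilled, 1)
residuals = spilled - (slope * ton_miles + intercept)

print(f"r = {r:.2f}")
print("residuals:", np.round(residuals, 3))
```

A residual plot with no pattern (and no single point far from zero) supports the linear model; a lone large residual, like the 1990 figure in the abstract, flags a possible outlier.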

