Sensorineural Hearing Loss and Upward Spread of Masking

1970 ◽  
Vol 13 (2) ◽  
pp. 426-437 ◽  
Author(s):  
Ellen S. Martin ◽  
J. M. Pickett

Pure-tone auditory thresholds were obtained in quiet and in three levels of masking noise for one normal-hearing group and five groups of subjects with different degrees of sensorineural loss. The masker was a low-pass noise, cut off at 250 Hz. It was presented at overall levels of 77, 97, and 107 dB SPL. Pure-tone thresholds were obtained at test frequencies within and above the masking band. A measure of noise rejection slope was used to describe spread of masking. Degree of loss, configuration of loss, and level of masking noise appear to have marked influences on upward spread of masking patterns in sensorineural subjects.
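The noise rejection slope used here can be illustrated as the rate (in dB per octave) at which masked-threshold elevation falls off with frequency above the masker's 250 Hz cutoff. A minimal sketch, with made-up threshold shifts (the function name and data are illustrative, not values from the study):

```python
import math

def rejection_slope(freqs_hz, shift_db, cutoff_hz=250.0):
    """Least-squares slope of threshold shift (dB) vs. octaves above the masker cutoff."""
    octaves = [math.log2(f / cutoff_hz) for f in freqs_hz]
    n = len(octaves)
    mx = sum(octaves) / n
    my = sum(shift_db) / n
    num = sum((x - mx) * (y - my) for x, y in zip(octaves, shift_db))
    den = sum((x - mx) ** 2 for x in octaves)
    return num / den  # dB/octave; a steeper negative slope means less upward spread

# Illustrative masked-threshold shifts at test frequencies above the 250 Hz band
print(rejection_slope([500, 1000, 2000, 4000], [40, 25, 12, 5]))
```

A shallower (less negative) slope would correspond to greater upward spread of masking, the pattern the study associates with degree and configuration of loss.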

1978 ◽  
Vol 21 (1) ◽  
pp. 5-36 ◽  
Author(s):  
Marilyn D. Wang ◽  
Charlotte M. Reed ◽  
Robert C. Bilger

It has been found that listeners with sensorineural hearing loss who show similar patterns of consonant confusions also tend to have similar audiometric profiles. The present study determined whether normal listeners, presented with filtered speech, would produce consonant confusions similar to those previously reported for the hearing-impaired listener. Consonant confusion matrices were obtained from eight normal-hearing subjects for four sets of CV and VC nonsense syllables presented under six high-pass and six low-pass filtering conditions. Patterns of consonant confusion for each condition were described using phonological features in a sequential information analysis. Severe low-pass filtering produced consonant confusions comparable to those of listeners with high-frequency hearing loss. Severe high-pass filtering gave a result comparable to that of patients with flat or rising audiograms. Mild filtering resulted in confusion patterns comparable to those of listeners with essentially normal hearing. An explanation in terms of the spectrum, the level of speech, and the configuration of the individual listener's audiogram is given.


1998 ◽  
Vol 41 (3) ◽  
pp. 549-563 ◽  
Author(s):  
Sid P. Bacon ◽  
Jane M. Opie ◽  
Danielle Y. Montoya

Speech recognition was measured in three groups of listeners: those with sensorineural hearing loss of (presumably) cochlear origin (HL), those with normal hearing (NH), and those with normal hearing who listened in the presence of a spectrally shaped noise that elevated their pure-tone thresholds to match those of individual listeners in the HL group (NM). Performance was measured in four backgrounds that differed only in their temporal envelope: steady-state (SS) speech-shaped noise, speech-shaped noise modulated by the envelope of multi-talker babble (MT), speech-shaped noise modulated by the envelope of single-talker speech (ST), and speech-shaped noise modulated by a 10-Hz square wave (SQ). Threshold signal-to-noise ratios (SNRs) were typically best in the ST and especially the SQ conditions, indicating a masking release in those modulated backgrounds. SNRs in the SS and MT conditions were essentially identical to one another. The masking release was largest in the listeners in the NH group, and it tended to decrease as hearing loss increased. In 5 of the 11 listeners in the HL group, the masking release was nearly identical to that obtained in the NM group matched to those listeners; in the other 6 listeners, the release was smaller than that in the NM group. The reduced masking release was simulated best in those HL listeners for whom the masking release was relatively large. These results suggest that reduced masking release for speech in listeners with sensorineural hearing loss can only sometimes be accounted for entirely by reduced audibility.
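The masking release reported here is just the difference between the threshold SNR in the steady-state background and the threshold SNR in a modulated background. A minimal sketch with illustrative numbers (none taken from the study):

```python
def masking_release(snr_ss_db, snr_mod_db):
    """Release from masking in dB; positive values mean the listener
    benefits from the dips in the modulated background."""
    return snr_ss_db - snr_mod_db

# e.g. a hypothetical NH listener: SS threshold SNR of -6 dB,
# 10-Hz square-wave (SQ) threshold SNR of -18 dB
print(masking_release(-6.0, -18.0))  # 12.0 dB of release
```

Under this definition, the study's finding is that the release shrinks as hearing loss increases, and that threshold matching with shaped noise (the NM group) reproduces the reduction for only some HL listeners.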


2020 ◽  
pp. 1-15
Author(s):  
Garrett Cardon ◽  
Anu Sharma

Purpose: Auditory threshold estimation using the auditory brainstem response or auditory steady state response is limited in some populations (e.g., individuals with auditory neuropathy spectrum disorder [ANSD] or those who have difficulty remaining still during testing and cannot tolerate general anesthetic). However, cortical auditory evoked potentials (CAEPs) can be recorded in many such patients and have been employed in threshold approximation. Thus, we studied CAEP estimates of auditory thresholds in participants with normal hearing, sensorineural hearing loss, and ANSD. Method: We recorded CAEPs at varying intensity levels to speech (i.e., /ba/) and tones (i.e., 1 kHz) to estimate auditory thresholds in normal-hearing adults (n = 10) and children (n = 10) and in case studies of children with sensorineural hearing loss and ANSD. Results: CAEP amplitude decreased and latency increased as stimulus intensity declined, until waveform components disappeared near auditory threshold levels. Overall, CAEP thresholds were within 10 dB HL of behavioral thresholds for both stimuli. Conclusions: The above findings suggest that CAEPs may be clinically useful in estimating auditory threshold in populations for whom such a method does not currently exist. Physiologic threshold estimation in difficult-to-test clinical populations could lead to earlier intervention and improved outcomes.
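The threshold-estimation logic described above reduces to finding the lowest stimulus level at which a CAEP component is still detectable, then comparing that to the behavioral threshold. A minimal sketch; the levels, presence flags, and 15 dB HL behavioral threshold below are illustrative, not study data:

```python
def caep_threshold(levels_db, present):
    """Lowest level (dB HL) at which a CAEP was detected; None if never detected."""
    detected = [lvl for lvl, p in zip(levels_db, present) if p]
    return min(detected) if detected else None

levels  = [80, 60, 40, 20, 10]                  # descending stimulus levels, dB HL
present = [True, True, True, True, False]       # waveform component detectable?
est = caep_threshold(levels, present)
behavioral = 15                                 # hypothetical behavioral threshold
print(est, abs(est - behavioral) <= 10)         # estimate, and whether it is within 10 dB
```

The 10 dB comparison mirrors the study's finding that CAEP thresholds fell within 10 dB HL of behavioral thresholds for both stimuli.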


1994 ◽  
Vol 37 (3) ◽  
pp. 655-661 ◽  
Author(s):  
Pamela E. Souza ◽  
Christopher W. Turner

This study examined the contributions of various properties of background noise to the speech recognition difficulties experienced by young and elderly listeners with hearing loss. Three groups of subjects participated: young listeners with normal hearing, young listeners with sensorineural hearing loss, and elderly listeners with sensorineural hearing loss. Sensitivity thresholds up to 4000 Hz of the young and elderly groups of listeners with hearing loss were closely matched, and a high-pass masking noise was added to minimize the contributions of high-frequency (above 4000 Hz) thresholds, which were not closely matched. Speech recognition scores for monosyllables were obtained in the high-pass noise alone and in three noise backgrounds. The latter consisted of high-pass noise plus one of three maskers: speech-spectrum noise, speech-spectrum noise temporally modulated by the envelope of multi-talker babble, and multi-talker babble. For all conditions, the groups with hearing impairment consistently scored lower than the group with normal hearing. Although there was a trend toward poorer speech recognition scores as the masker condition more closely resembled the speech babble, the effect of masker condition was not statistically significant. There was no interaction between group and condition, implying that listeners with normal hearing and listeners with hearing loss are affected similarly by the type of background noise when the long-term spectrum of the masker is held constant. A significant effect of age was not observed. In addition, masked thresholds for pure tones in the presence of the speech-spectrum masker were not different for the young and elderly listeners with hearing loss. These results suggest that, for both steady-state and modulated background noises, difficulties in speech recognition for monosyllables are due primarily, and perhaps exclusively, to the presence of sensorineural hearing loss itself, and not to age-specific factors.


1975 ◽  
Vol 18 (2) ◽  
pp. 261-271 ◽  
Author(s):  
Ellen M. Danaher ◽  
J. M. Pickett

Discrimination of second-formant (F2) transitions in synthetic vowels was measured with and without the first formant (F1) present, with F1 and F2 presented monotically vs dichotically, and with the onset of F1 delayed relative to the onset of F2. Twenty-three subjects with sensorineural loss were tested. When F2 was presented alone, discrimination thresholds were the same as those of normal-hearing subjects. In most sensorineural subjects, discrimination was reduced whenever F1 was present in the stimulus. F1 produced three types of masking that reduced the ability to discriminate F2 transitions: upward spread of masking and backward masking occurred when F1 and F2 were presented to the same ear; a type of central masking occurred when F1 and F2 were presented to opposite ears.


2005 ◽  
Vol 16 (06) ◽  
pp. 367-382 ◽  
Author(s):  
Richard H. Wilson ◽  
Deborah G. Weakley

The purpose of this study was to determine if performances on a 500 Hz MLD task and a word-recognition task in multitalker babble covaried or varied independently for listeners with normal hearing and for listeners with hearing loss. Young listeners with normal hearing (n = 25) and older listeners (25 per decade from 40–80 years, n = 125) with sensorineural hearing loss were studied. Thresholds at 500 and 1000 Hz were ≤30 dB HL and ≤40 dB HL, respectively, with thresholds above 1000 Hz <100 dB HL. There was no systematic relationship between the 500 Hz MLD and word-recognition performance in multitalker babble. Higher SoNo and SπNo thresholds were observed for the older listeners, but the MLDs were the same for all groups. Word recognition in babble in terms of signal-to-babble ratio was on average 6.5 dB (40- to 49-year-old group) to 10.8 dB (80- to 89-year-old group) poorer for the older listeners with hearing loss. Neither pure-tone thresholds nor word-recognition abilities in quiet accurately predicted word-recognition performance in multitalker babble.
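The 500 Hz masking level difference (MLD) described above is conventionally the drop in detection threshold when the signal is interaurally phase-inverted (SπNo) relative to the diotic condition (SoNo). A minimal sketch; the threshold values are illustrative, not from the study:

```python
def mld(sono_db, spino_db):
    """Masking level difference in dB: SoNo threshold minus SpiNo threshold."""
    return sono_db - spino_db

# Hypothetical detection thresholds for a 500 Hz tone in noise
print(mld(62.0, 50.0))  # 12.0 dB MLD
```

Under this definition, the study's result is that both thresholds rose with age while their difference (the MLD) stayed constant across groups.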


Author(s):  
Seema Panday ◽  
Harsha Kathard ◽  
Wayne J. Wilson

Background: This study continued the development of an isiZulu speech reception threshold (zSRT) test for use with first-language adult speakers of isiZulu. Objectives: The objective of this study was to determine the convergent and concurrent validity of the zSRT test. Methods: One hundred adult isiZulu first-language speakers with normal hearing and 76 first-language adult isiZulu speakers with conductive or sensorineural hearing losses ranging from mild to severe were assessed on pure tone audiometry and the newly developed zSRT test. Convergent validity was established through agreement of the zSRT scores with pure tone average (PTA) scores. Concurrent validity was assessed by examining the steepness of the psychometric curve for each word in the zSRT test for each type and degree of hearing loss. Results: Intraclass correlation coefficient (ICC) analyses showed zSRT scores were in substantial to very high agreement with PTA scores for the normal-hearing (NH) and hearing loss (HL) groups (NH: right ear ICC consistency = 0.78, left ear = 0.67; HL: right ear = 0.97, left ear = 0.95). The mean psychometric slope at 50% correct perception for all words in the zSRT test was 4.92 %/dB for the mild conductive hearing loss group, 5.26 %/dB for the moderate conductive hearing loss group, 2.85 %/dB for the moderately severe sensorineural hearing loss group, and 2.47 %/dB for the severe sensorineural hearing loss group. These slopes were appropriate for the degree of hearing loss observed in each group. Conclusion: The zSRT test showed convergent and concurrent validity for assessing SRT in first-language adult speakers of isiZulu.
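The %/dB slope at 50% correct reported above can be read off a logistic psychometric function: for P(L) = 1 / (1 + exp(-k(L - L50))), the derivative at L = L50 is k/4, i.e. 25k percentage points per dB. A minimal sketch, assuming a logistic form (the study does not specify its fitting function) and an illustrative k:

```python
import math

def logistic_pc(level_db, l50_db, k):
    """Proportion correct at a given presentation level (logistic psychometric function)."""
    return 1.0 / (1.0 + math.exp(-k * (level_db - l50_db)))

def slope_at_midpoint_pct_per_db(k):
    """Slope of the logistic function at 50% correct, in %/dB (= 100 * k / 4)."""
    return 100.0 * k / 4.0

# k = 0.2 gives a 5.0 %/dB midpoint slope, in the range of the conductive-loss groups
print(slope_at_midpoint_pct_per_db(0.2))
```

Shallower slopes (smaller k), as in the sensorineural groups, mean a wider level range over which intelligibility grows from chance to ceiling.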


2017 ◽  
Vol 96 (10-11) ◽  
pp. E47-E52
Author(s):  
Raman Wadhera ◽  
Sharad Hernot ◽  
Sat Paul Gulati ◽  
Vijay Kalra

We performed a prospective interventional study to evaluate correlations between hearing thresholds determined by pure-tone audiometry (PTA) and auditory steady-state response (ASSR) testing in two types of patients with hearing loss and a control group of persons with normal hearing. The study was conducted on 240 ears—80 ears with conductive hearing loss, 80 ears with sensorineural hearing loss, and 80 normal-hearing ears. We found that mean threshold differences between PTA results and ASSR testing at different frequencies did not exceed 15 dB in any group. Using Pearson correlation coefficient calculations, we determined that the two responses correlated better in patients with sensorineural hearing loss than in those with conductive hearing loss. We conclude that measuring ASSRs can be an excellent complement to other diagnostic methods in determining hearing thresholds.
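The Pearson correlation the authors computed between PTA and ASSR thresholds can be sketched directly from paired threshold lists. A minimal example; the paired thresholds below are made up for illustration, not data from the study:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

pta  = [25, 40, 55, 70, 85]   # dB HL, illustrative behavioral thresholds
assr = [30, 45, 60, 80, 95]   # dB HL, illustrative ASSR-estimated thresholds
print(round(pearson_r(pta, assr), 3))
```

A coefficient near 1 with a small mean threshold difference (≤15 dB in every group here) is what supports ASSR as a complement to behavioral audiometry.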


1979 ◽  
Vol 22 (4) ◽  
pp. 697-707 ◽  
Author(s):  
Shlomo Silman ◽  
Stanley A. Gelfand

This study examined the precision of the bivariate method in subjects with high-frequency sensorineural hearing loss. The current bivariate data effectively separated normal hearing subjects from those with pure tone averages of ≥32 dB HL, in a manner consistent with the results of Popelka and Trumpf (1976) and Margolis and Fox (1977b). However, for persons with high-frequency losses the prediction of hearing levels from acoustic reflex thresholds (ARTs) appears to be complicated. Moderate hearing losses involving 500, 1000, and 2000 Hz ("speech frequencies") as well as higher frequencies were identified on the basis of elevated average ARTs for 500, 1000, and 2000 Hz. Normal ears (pure tone averages of ≤30 dB HL) were isolated from others on the basis of position on the bivariate graph. Those with (1) normal hearing in the "speech frequencies" and a high-frequency loss and (2) a mild loss in the "speech frequencies" and a high-frequency loss could be separated from those with normal hearing by location on the bivariate graph, and from those with moderate (or worse) losses on the basis of average ART for tones. Consideration of these findings is useful in the evaluation of patients at risk for high-frequency loss, such as patients with noise exposure, and is particularly useful in cases of suspected functional impairment within this population. A modification of the bivariate method is suggested which extends its application to patient populations with a large incidence of high-frequency sensorineural hearing loss.

