Discrimination of interaural temporal disparities by normal‐hearing listeners and listeners with high‐frequency sensorineural hearing loss

1986 ◽  
Vol 79 (5) ◽  
pp. 1541-1547 ◽  
Author(s):  
Walter J. Smoski ◽  
Constantine Trahiotis
1975 ◽  
Vol 18 (3) ◽  
pp. 444-455 ◽  
Author(s):  
Brian E. Walden ◽  
Allen A. Montgomery

Judgments of consonant similarity were obtained from subjects who had normal hearing, high-frequency sensorineural hearing loss, or relatively flat sensorineural hearing loss. The individual differences model, implemented through the program INDSCAL, was used to derive a set of perceptual features empirically from the similarity judgments and to group the subjects on the basis of strength of feature usage. The analysis revealed that sonorance was the dominant dimension in the similarity judgments of the subjects with high-frequency hearing losses, while sibilance tended to dominate the judgments of the subjects with flat audiometric configurations. The normal-hearing subjects tended to weight these two dimensions approximately equally. These differences in similarity judgments based on audiometric configuration were observed despite the fact that the two hearing-impaired groups could not be distinguished on the basis of word-recognition ability.


2021 ◽  
pp. 1-10
Author(s):  
Jennifer E. Gonzalez ◽  
Frank E. Musiek

Purpose: Clinical use of electrophysiologic measures has been limited to the use of brief stimuli to evoke responses. While brief stimuli elicit onset responses in individuals with normal hearing and normal central auditory nervous system (CANS) function, these responses reflect the integrity of only a fraction of the mainly excitatory central auditory neurons. Longer stimuli could provide information regarding both excitatory and inhibitory CANS function. Our goal was to measure the onset–offset N1–P2 auditory evoked response in subjects with normal hearing and subjects with moderate high-frequency sensorineural hearing loss (HFSNHL) to determine whether the response can be measured in individuals with moderate HFSNHL and, if so, whether waveform components differ between participant groups.

Method: Waveforms were obtained from 10 participants with normal hearing and seven participants with HFSNHL aged 40–67 years using 2,000-ms broadband noise stimuli with 40-ms rise–fall times presented at 50 dB SL referenced to stimulus threshold. Amplitudes and latencies were analyzed via repeated-measures analysis of variance (ANOVA). N1 and P2 onset latencies were compared to their offset counterparts via repeated-measures ANOVA after subtracting 2,000 ms from the offset latencies to account for stimulus duration. Offset-to-onset trough-to-peak amplitude ratios were compared between groups using a one-way ANOVA.

Results: Responses were evoked from all participants. There were no differences between participant groups for the waveform components measured, and Response × Participant Group interactions were not significant. Offset N1–P2 latencies were significantly shorter than their onset counterparts after adjusting for stimulus duration (normal hearing: 43 ms shorter; HFSNHL: 47 ms shorter).

Conclusions: Onset–offset N1–P2 responses were resistant to moderate HFSNHL. The onset was likely elicited by the presentation of a sound in silence and the offset by the change in stimulus envelope from plateau to fall, suggesting an excitatory onset response and an inhibitory-influenced offset response. These results indicate that this protocol can be used to investigate CANS function in individuals with moderate HFSNHL.

Supplemental Material: https://doi.org/10.23641/asha.14669007
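The stimulus described in the Method section (a 2,000-ms broadband noise burst with 40-ms rise–fall times) can be sketched in a few lines of numpy. This is an illustrative reconstruction, not the authors' code: the 44.1 kHz sample rate, the cosine-squared ramp shape, and the RNG seed are assumptions the abstract does not specify.

```python
import numpy as np

def make_stimulus(duration_s=2.0, rise_fall_s=0.040, fs=44100, seed=0):
    """Broadband noise burst with cosine-squared onset/offset ramps.

    Sketch of the 2,000-ms, 40-ms rise-fall stimulus described in the
    abstract; sample rate, ramp shape, and seed are assumed values.
    """
    rng = np.random.default_rng(seed)
    n = int(round(duration_s * fs))
    noise = rng.standard_normal(n)

    n_ramp = int(round(rise_fall_s * fs))
    rise = np.sin(np.linspace(0.0, np.pi / 2, n_ramp)) ** 2  # ramps 0 -> 1
    envelope = np.ones(n)
    envelope[:n_ramp] = rise          # 40-ms onset ramp
    envelope[-n_ramp:] = rise[::-1]   # 40-ms offset ramp
    return noise * envelope

stim = make_stimulus()
```

The gradual fall of the envelope from plateau to silence is the portion of the stimulus the authors associate with the offset response.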


1979 ◽  
Vol 22 (4) ◽  
pp. 697-707 ◽  
Author(s):  
Shlomo Silman ◽  
Stanley A. Gelfand

This study examined the precision of the bivariate method in subjects with high-frequency sensorineural hearing loss. The current bivariate data effectively separated normal hearing subjects from those with pure tone averages of ≥32 dB HL, in a manner consistent with the results of Popelka and Trumpf (1976) and Margolis and Fox (1977b). However, for persons with high-frequency losses the prediction of hearing levels from acoustic reflex thresholds (ARTs) appears to be complicated. Moderate hearing losses involving 500, 1000 and 2000 Hz (“speech frequencies”) as well as higher frequencies were identified on the basis of elevated average ARTs for 500, 1000 and 2000 Hz. Normal ears (pure tone averages of ≤30 dB HL) were isolated from others on the basis of position on the bivariate graph. Those with (1) normal hearing in the “speech frequencies” and a high-frequency loss and (2) a mild loss in the “speech frequencies” and a high-frequency loss, could be separated from those with normal hearing by location on the bivariate graph, and from those with moderate (or worse) losses on the basis of average ART for tones. Consideration of these findings is useful in the evaluation of patients at risk for high-frequency loss, such as patients with noise exposure, and is particularly useful in cases of suspected functional impairment within this population. A modification of the bivariate method is suggested which extends its application to patient populations with a large incidence of high frequency sensorineural hearing loss.


1981 ◽  
Vol 24 (4) ◽  
pp. 506-513 ◽  
Author(s):  
David Y. Chung

Quiet and masked thresholds were obtained from 5 subjects with normal hearing and 31 subjects with sensorineural hearing loss. Maskers were pure tones varying in frequency and intensity. The hearing-impaired subjects showed an abnormal spread of masking when masking was measured in terms of masked threshold. The abnormal spread of masking seems to be related to both the hearing threshold of the masker and the quiet threshold of the test signal. The notch due to detection of combination tones found on the high-frequency slope of masked audiograms of normal subjects (obscuring the actual extent to which the signal is masked) tends to accentuate the apparent abnormal upward spread of masking in the hearing-impaired subjects. The abnormal spread in the latter case is real, but comparison with the normal case must take the notch into account.


2006 ◽  
Vol 263 (7) ◽  
pp. 608-613 ◽  
Author(s):  
A. A. Sazgar ◽  
V. Dortaj ◽  
K. Akrami ◽  
S. Akrami ◽  
A. R. Karimi Yazdi

Author(s):  
Jawahar Antony P ◽  
Animesh Barman

Background and Aim: Auditory stream segregation is a phenomenon that splits sounds into different streams. The temporal cues that contribute to stream segregation have previously been studied in normal-hearing people. In people with sensorineural hearing loss (SNHL), temporal envelope coding is usually unaffected, while temporal fine structure cues are affected. These two temporal cues depend on the amplitude modulation frequency. The present study aimed to evaluate the effect of sinusoidal amplitude modulated (SAM) broadband noises on stream segregation in individuals with SNHL.

Methods: Thirty normal-hearing subjects and 30 subjects with mild to moderate bilateral SNHL participated in the study. Two experiments were performed: in the first experiment, an AB sequence of broadband SAM stimuli was presented, while in the second, only the B sequence was presented. A low (16 Hz) and a high (256 Hz) standard modulation frequency were used in these experiments. The subjects were asked to identify the irregularities in the rhythmic sequence.

Results: Both study groups identified the irregularities similarly in both experiments. The minimum cumulative delay was slightly higher in the SNHL group.

Conclusion: The findings suggest that the temporal cues provided by the broadband SAM noises at the low and high standard modulation frequencies were not used for stream segregation by either the normal-hearing subjects or those with SNHL.

Keywords: Stream segregation; sinusoidal amplitude modulation; sensorineural hearing loss
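The SAM broadband noise stimuli used in these experiments follow the standard definition of sinusoidal amplitude modulation: a noise carrier multiplied by the envelope 1 + m·sin(2πf_m t). A minimal sketch, assuming a 44.1 kHz sample rate, 1-s duration, and full modulation depth (none of which are specified in the abstract):

```python
import numpy as np

def sam_noise(fm_hz, duration_s=1.0, fs=44100, mod_depth=1.0, seed=0):
    """Sinusoidally amplitude-modulated (SAM) broadband noise.

    carrier * (1 + m * sin(2*pi*fm*t)) -- the standard SAM definition.
    Duration, sample rate, depth, and seed are illustrative assumptions;
    the abstract specifies only the modulation frequencies.
    """
    rng = np.random.default_rng(seed)
    n = int(round(duration_s * fs))
    t = np.arange(n) / fs
    carrier = rng.standard_normal(n)
    envelope = 1.0 + mod_depth * np.sin(2 * np.pi * fm_hz * t)
    return carrier * envelope

low = sam_noise(16)    # low standard modulation frequency (16 Hz)
high = sam_noise(256)  # high standard modulation frequency (256 Hz)
```

At 16 Hz the envelope fluctuation is itself audible as a slow flutter, whereas at 256 Hz it is coded much more coarsely by the auditory periphery, which is why the two standards probe different envelope-coding regimes.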

