Influence of Frequency Selectivity on Comodulation Masking Release in Normal-Hearing Listeners

1993 ◽  
Vol 36 (2) ◽  
pp. 410-423 ◽  
Author(s):  
Joseph W. Hall ◽  
John H. Grose ◽  
Brian C. J. Moore

Experiments 1 and 2 investigated the effect of frequency selectivity on comodulation masking release (CMR) in normal-hearing subjects, examining conditions where frequency selectivity was relatively good (low masker level at both low [500-Hz] and high [2500-Hz] signal frequency, and high masker level at low signal frequency) and where frequency selectivity was somewhat degraded (high masker level and high signal frequency). The first experiment investigated CMR in conditions where a narrow modulated noise band was centered on the signal frequency, and a wider comodulated noise band was located below the band centered on the signal frequency. Signal frequencies were 500 and 2000 Hz. The masker level and the frequency separation between the on-signal and comodulated flanking band were varied. In addition to conditions where the flanking band and on-signal band were presented at the same spectrum level, conditions were included where the spectrum level of the flanking band was 10 dB higher than that of the on-signal band, in order to accentuate the effects of reduced frequency selectivity. Results indicated that CMR was reduced in the 2000-Hz region when masker level was high, when the frequency separation between on-signal and flanking band was small, and when a 10-dB level disparity existed between the on-signal and flanking band. In the second experiment, CMR was investigated for narrow comodulated noise bands, presented either without any additional sound or in the presence of a random noise background. CMR increased slightly as the masker level increased, except at 2500 Hz when the noise background was present. The decrease in CMR at 2500 Hz with the high masker level and with a noise background present could be explained in terms of reduced frequency selectivity. In a third experiment, we compared performance for equal absolute bandwidth maskers at a low (500-Hz) and a high (2000-Hz) stimulus frequency.
Results of this third experiment suggested that detection in modulated noise may worsen when fewer quasi-independent auditory filters contribute temporal envelope information. The effects found in the present study, in which normal-hearing listeners were tested under conditions of degraded frequency selectivity, may be useful in understanding part of the reduction of CMR that occurs in cochlear-impaired listeners with reduced frequency selectivity.
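The comodulated-band maskers described in this abstract can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' stimulus-generation code: all parameter values (sample rate, modulator cutoff, the 2000-Hz on-signal and 1000-Hz flanking center frequencies) are assumed for demonstration. It imposes one shared low-pass noise modulator on tonal carriers at two center frequencies; swapping in an independent modulator yields the uncorrelated-envelope control condition.

```python
import numpy as np

# Illustrative parameters (not taken from the paper).
fs = 16000                               # sample rate, Hz
t = np.arange(fs) / fs                   # 1 s of time samples
rng = np.random.default_rng(1)

def lowpass_noise(cutoff_hz):
    """Low-pass Gaussian noise, made by zeroing FFT bins above the cutoff."""
    x = rng.standard_normal(t.size)
    spectrum = np.fft.rfft(x)
    spectrum[np.fft.rfftfreq(t.size, 1 / fs) > cutoff_hz] = 0.0
    y = np.fft.irfft(spectrum, n=t.size)
    return y / np.std(y)

def modulated_band(fc_hz, modulator):
    """Narrow noise band: a tonal carrier multiplied by a slow noise envelope."""
    envelope = 1.0 + 0.8 * modulator     # keep the envelope mostly positive
    return envelope * np.cos(2 * np.pi * fc_hz * t)

shared = lowpass_noise(50.0)             # one modulator shared across bands
independent = lowpass_noise(50.0)        # a second, independent modulator

on_signal = modulated_band(2000.0, shared)           # band at the signal frequency
comod_flank = modulated_band(1000.0, shared)         # comodulated: same envelope
uncorr_flank = modulated_band(1000.0, independent)   # uncorrelated envelope
```

In the comodulated condition the envelope dips of the two bands coincide in time, which is the across-frequency cue listeners are thought to exploit; with an independent modulator they do not.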

1990 ◽  
Vol 33 (1) ◽  
pp. 96-102 ◽  
Author(s):  
Kathleen Veloso ◽  
Joseph W. Hall ◽  
John H. Grose

Frequency selectivity and comodulation masking release (CMR) for a 1000-Hz signal frequency were examined in 6-year-old children and adults. An abbreviated measurement of frequency selectivity was also made for a 500-Hz signal. Frequency selectivity was measured using a notched-noise masking method, and CMR was measured using narrow bands of noise whose amplitude envelopes were either uncorrelated or correlated. There were 6 listeners in each age group. No differences were observed between the adults and children for either auditory measure. Similarly, no differences were observed in the ability to detect a pure-tone signal in a relatively wideband noise masker. When the masking noise was narrowband, however, the masked thresholds of the children were higher than those of the adults. Two characteristics distinguish narrowband noise from wideband noise: (1) narrowband noise has a pitch quality corresponding to its center frequency, whereas wideband noise does not have a definite pitch; and (2) the intensity fluctuations are relatively greater in narrowband noise than in wideband noise. These findings may suggest that 6-year-old children have a reduced ability to detect signals in noise backgrounds where the signal has perceptual qualities similar to the noise, or in noise backgrounds having a high degree of fluctuation.
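The notched-noise method mentioned above estimates the auditory filter from tone thresholds measured as a function of the width of a spectral notch in the masker. As a hedged sketch (all parameter values are assumed for illustration, not taken from the study), such a masker can be synthesized in the frequency domain:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 16000                      # sample rate, Hz (illustrative)
n = fs                          # 1 s of samples
freqs = np.fft.rfftfreq(n, 1 / fs)

def notched_noise(f0_hz, notch_rel, band_hz=400.0):
    """Two flat noise bands flanking a spectral notch centered on f0.

    notch_rel is the notch half-width as a fraction of f0, the parameter
    usually varied across conditions in notched-noise masking experiments.
    """
    spectrum = rng.standard_normal(freqs.size) + 1j * rng.standard_normal(freqs.size)
    edge = notch_rel * f0_hz
    in_band = (np.abs(freqs - f0_hz) > edge) & (np.abs(freqs - f0_hz) <= edge + band_hz)
    spectrum[~in_band] = 0.0    # silence everything outside the two flanking bands
    return np.fft.irfft(spectrum, n=n)

masker = notched_noise(1000.0, 0.2)    # notch edges at 800 and 1200 Hz
```

Thresholds for the 1000-Hz tone obtained at several notch widths are then fitted with a filter model to recover the filter shape; wider notches let less masker energy through the filter, so thresholds fall.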


1980 ◽  
Vol 23 (3) ◽  
pp. 646-669 ◽  
Author(s):  
Mary Florentine ◽  
Søren Buus ◽  
Bertram Scharf ◽  
Eberhard Zwicker

This study compares frequency selectivity—as measured by four different methods—in observers with normal hearing and in observers with conductive (non-otosclerotic), otosclerotic, noise-induced, or degenerative hearing losses. Each category of loss was represented by a group of 7 to 10 observers, who were tested at center frequencies of 500 Hz and 4000 Hz. For each group, the following four measurements were made: psychoacoustical tuning curves, narrow-band masking, two-tone masking, and loudness summation. Results showed that (a) frequency selectivity was reduced at frequencies where a cochlear hearing loss was present, (b) frequency selectivity was reduced regardless of the test level at which observers with normal hearing and observers with cochlear impairment were compared, (c) all four measures of frequency selectivity were significantly correlated, and (d) reduced frequency selectivity was positively correlated with the amount of cochlear hearing loss.


1997 ◽  
Vol 101 (3) ◽  
pp. 1600-1610 ◽  
Author(s):  
Sid P. Bacon ◽  
Jungmee Lee ◽  
Daniel N. Peterson ◽  
Dawne Rainey

2019 ◽  
Vol 23 ◽  
pp. 233121651984198 ◽  
Author(s):  
Brian C. J. Moore ◽  
Jie Wan ◽  
Ajanth Varathanathan ◽  
Sophie Naddell ◽  
Thomas Baer

It is widely believed that the frequency selectivity of the auditory system is largely determined by processes occurring in the cochlea. If so, musical training would not be expected to influence frequency selectivity. Consistent with this, auditory filter shapes for low center frequencies do not differ between musicians and nonmusicians. However, it has been reported that psychophysical tuning curves (PTCs) at 4000 Hz were sharper for musicians than for nonmusicians. This study explored the origin of the discrepancy across studies. Frequency selectivity was estimated for musicians and nonmusicians using three methods: fast PTCs with a masker that swept in frequency, “traditional” PTCs obtained using several fixed masker center frequencies, and the notched-noise method. The signal frequency was 4000 Hz. The data were fitted assuming that each side of the auditory filter had the shape of a rounded-exponential function. The sharpness of the auditory filters, estimated as the Q10 values, did not differ significantly between musicians and nonmusicians for any of the methods, but detection efficiency tended to be higher for the musicians. This is consistent with the idea that musicianship influences auditory proficiency but does not influence the peripheral processes that determine the frequency selectivity of the auditory system.
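The fitting procedure described above assumes a rounded-exponential ("roex") filter shape, W(g) = (1 + pg)·exp(−pg), where g is the deviation from the center frequency normalized by the center frequency and p controls sharpness. The sketch below is a minimal illustration (the value of p is assumed, not taken from the data) of how a Q10 value follows from a fitted p, using bisection to find the 10-dB-down point of the filter.

```python
import numpy as np

def roex_weight(g, p):
    """Rounded-exponential filter weight W(g) = (1 + p*g) * exp(-p*g),
    where g = |f - f0| / f0 is the normalized deviation from center."""
    return (1.0 + p * g) * np.exp(-p * g)

def q10_from_p(p):
    """Q10 of a symmetric roex(p) filter: center frequency / 10-dB bandwidth.

    Finds the g where W(g) = 0.1 (the 10-dB-down point) by bisection;
    for a symmetric filter the 10-dB bandwidth is 2*g10*f0, so
    Q10 = 1 / (2 * g10).  (The ERB of a roex(p) filter is 4*f0/p.)
    """
    lo, hi = 0.0, 10.0 / p          # W(10/p) << 0.1, so the root is bracketed
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if roex_weight(mid, p) > 0.1:
            lo = mid
        else:
            hi = mid
    g10 = 0.5 * (lo + hi)
    return 1.0 / (2.0 * g10)

# p = 30 is an assumed, illustrative value; sharper filters have larger p.
q10 = q10_from_p(30.0)
```

Comparing Q10 (or p) between groups, as the study does, separates filter sharpness from detection efficiency, which enters the model only as a constant offset on thresholds.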


1993 ◽  
Vol 36 (6) ◽  
pp. 1306-1314 ◽  
Author(s):  
Joseph W. Hall ◽  
John H. Grose

Monaural envelope correlation perception was investigated in listeners with normal hearing and in listeners with cochlear hearing loss. In a three-interval forced-choice procedure, the subject's task was to identify the one interval in which the noise bands had correlated envelopes. Performance was determined as a function of the spectral separation between noise bands (Δf of 250, 500, or 1000 Hz) and the number of noise bands present (two, three, or five). Although individual differences existed, the results generally indicated better performance for the listeners with normal hearing when the Δf between bands was relatively small; however, there was no significant effect of hearing loss when the frequency separation between bands was greater than 250 Hz. The listeners with normal hearing generally showed decreased performance with increasing Δf, whereas the performance of many of the listeners with hearing impairment did not change appreciably with variation in Δf. Both groups of listeners showed improved performance with increasing number of noise bands present for the 500-Hz Δf. Only the listeners with hearing impairment showed significantly improved performance with increasing band number for the 250-Hz Δf; neither group showed improved performance with increasing band number for the 1000-Hz Δf. With five bands present, the performance of the listeners with hearing impairment did not differ significantly from that of the listeners with normal hearing, even for the 250-Hz Δf. It is possible that the poor performance of many of the listeners with hearing impairment when Δf is small is due to relatively poor peripheral frequency analysis. It is difficult to determine the role of within-channel versus across-channel cues in the effects obtained.
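The envelope correlation that listeners judged in this task can be quantified directly. Below is a hedged sketch (stimulus parameters are assumed for illustration): it extracts temporal envelopes with an FFT-based Hilbert transform and correlates them, so bands built from a shared modulator score near 1 and bands with independent modulators score lower.

```python
import numpy as np

fs = 16000                          # sample rate, Hz (illustrative)
t = np.arange(fs) / fs              # 1 s of time samples
rng = np.random.default_rng(2)

def lowpass_noise(cutoff_hz):
    """Low-pass Gaussian noise used as a slow envelope modulator."""
    spectrum = np.fft.rfft(rng.standard_normal(t.size))
    spectrum[np.fft.rfftfreq(t.size, 1 / fs) > cutoff_hz] = 0.0
    y = np.fft.irfft(spectrum, n=t.size)
    return y / np.std(y)

def envelope(x):
    """Temporal envelope via the analytic signal (FFT-based Hilbert transform)."""
    spectrum = np.fft.fft(x)
    gain = np.zeros(x.size)
    gain[0] = 1.0
    gain[1:(x.size + 1) // 2] = 2.0     # double the positive frequencies
    if x.size % 2 == 0:
        gain[x.size // 2] = 1.0         # keep the Nyquist bin
    return np.abs(np.fft.ifft(spectrum * gain))

def envelope_correlation(x, y):
    """Pearson correlation between the envelopes of two signals."""
    return np.corrcoef(envelope(x), envelope(y))[0, 1]

m = lowpass_noise(50.0)
band_a = (1.0 + 0.8 * m) * np.cos(2 * np.pi * 1000.0 * t)  # correlated pair...
band_b = (1.0 + 0.8 * m) * np.cos(2 * np.pi * 1500.0 * t)  # ...shares a modulator
band_c = (1.0 + 0.8 * lowpass_noise(50.0)) * np.cos(2 * np.pi * 1500.0 * t)
```

Note that this physical correlation measure is neutral about mechanism: it does not distinguish the within-channel and across-channel cues whose roles the abstract leaves open.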

