A new procedure for measuring peripheral compression in normal-hearing and hearing-impaired listeners

2001 ◽  
Vol 110 (4) ◽  
pp. 2045-2064 ◽  
Author(s):  
David A. Nelson ◽  
Anna C. Schroder ◽  
Magdalena Wojtczak

2013 ◽  
Vol 24 (04) ◽  
pp. 274-292 ◽  
Author(s):  
Van Summers ◽  
Matthew J. Makashay ◽  
Sarah M. Theodoroff ◽  
Marjorie R. Leek

Background: It is widely believed that suprathreshold distortions in auditory processing contribute to the speech recognition deficits experienced by hearing-impaired (HI) listeners in noise. Damage to outer hair cells and attendant reductions in peripheral compression and frequency selectivity may contribute to these deficits. In addition, reduced access to temporal fine structure (TFS) information in the speech waveform may play a role. Purpose: To examine how measures of peripheral compression, frequency selectivity, and TFS sensitivity relate to speech recognition performance by HI listeners. To determine whether distortions in processing reflected by these psychoacoustic measures are more closely associated with speech deficits in steady-state or modulated noise. Research Design: Normal-hearing (NH) and HI listeners were tested on tasks examining frequency selectivity (notched-noise task), peripheral compression (temporal masking curve task), and sensitivity to TFS information (frequency modulation [FM] detection task) in the presence of random amplitude modulation. Performance was tested at 500, 1000, 2000, and 4000 Hz at several presentation levels. The same listeners were tested on sentence recognition in steady-state and modulated noise at several signal-to-noise ratios. Study Sample: Ten NH and 18 HI listeners were tested. NH listeners ranged in age from 36 to 80 yr (M = 57.6). For HI listeners, ages ranged from 58 to 87 yr (M = 71.8). Results: Scores on the FM detection task at 1 and 2 kHz were significantly correlated with speech scores in both noise conditions. Frequency selectivity and compression measures were not as clearly associated with speech performance. Speech Intelligibility Index (SII) analyses indicated only small differences in speech audibility across subjects for each signal-to-noise ratio (SNR) condition that would predict differences in speech scores no greater than 10% at a given SNR. 
Actual speech scores varied by as much as 80% across subjects. Conclusions: The results suggest that distorted processing of audible speech cues was a primary factor accounting for differences in speech scores across subjects and that reduced ability to use TFS cues may be an important component of this distortion. The influence of TFS cues on speech scores was comparable in steady-state and modulated noise. Speech recognition was not related to audibility, represented by the SII, once high-frequency sensitivity differences across subjects (beginning at 5 kHz) were removed statistically. This might indicate that high-frequency hearing loss is associated with distortions in processing in lower-frequency regions.
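The SII analysis described above rests on a simple idea: the index is a sum of per-band audibility values weighted by band-importance weights. A minimal sketch of that idea, assuming illustrative band levels, thresholds, and weights (not the ANSI S3.5 table values):

```python
# Sketch of an SII-style computation: audibility in each band, weighted by
# a band-importance function. All numbers below are illustrative.

def band_audibility(speech_db, noise_db, threshold_db):
    """Audibility of one band: fraction of the 30-dB speech dynamic range
    above the effective masked threshold, clipped to [0, 1]."""
    effective_floor = max(noise_db, threshold_db)
    return min(max((speech_db - effective_floor + 15.0) / 30.0, 0.0), 1.0)

def sii(speech_levels, noise_levels, thresholds, importance):
    """Weighted sum of band audibilities; importance weights sum to 1."""
    assert abs(sum(importance) - 1.0) < 1e-9
    return sum(w * band_audibility(s, n, t)
               for w, s, n, t in zip(importance, speech_levels, noise_levels, thresholds))

# Four illustrative bands (500, 1000, 2000, 4000 Hz)
importance = [0.2, 0.3, 0.3, 0.2]
speech     = [55.0, 50.0, 45.0, 40.0]   # band speech levels, dB
noise      = [40.0, 35.0, 30.0, 25.0]   # band noise levels, dB
quiet_thr  = [20.0, 20.0, 25.0, 45.0]   # listener thresholds, dB

index = sii(speech, noise, quiet_thr, importance)
print(round(index, 3))  # 0.867
```

A listener with elevated thresholds in one band loses only that band's weighted contribution, which is why the SII predicts small audibility-driven score differences even when measured speech scores diverge widely.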


2020 ◽  
Vol 63 (4) ◽  
pp. 1299-1311 ◽  
Author(s):  
Timothy Beechey ◽  
Jörg M. Buchholz ◽  
Gitte Keidser

Objectives: This study investigates the hypothesis that hearing aid amplification reduces effort within conversation for both hearing aid wearers and their communication partners. Levels of effort, in the form of speech production modifications, required to maintain successful spoken communication in a range of acoustic environments are compared to earlier reported results measured in unaided conversation conditions. Design: Fifteen young adult normal-hearing participants and 15 older adult hearing-impaired participants were tested in pairs. Each pair consisted of one young normal-hearing participant and one older hearing-impaired participant. Hearing-impaired participants received directional hearing aid amplification, according to their audiogram, via a master hearing aid with gain provided according to the NAL-NL2 fitting formula. Pairs of participants were required to take part in naturalistic conversations through the use of a referential communication task. Each pair took part in five conversations, each of 5-min duration. During each conversation, participants were exposed to one of five different realistic acoustic environments presented through highly open headphones. The ordering of acoustic environments across experimental blocks was pseudorandomized. Resulting recordings of conversational speech were analyzed to determine the magnitude of speech modifications, in terms of vocal level and spectrum, produced by normal-hearing talkers as a function of both acoustic environment and the degree of high-frequency average hearing impairment of their conversation partner. Results: The magnitude of spectral modifications of speech produced by normal-hearing talkers during conversations with aided hearing-impaired interlocutors was smaller than the speech modifications observed during conversations between the same pairs of participants in the absence of hearing aid amplification. 
Conclusions: The provision of hearing aid amplification reduces the effort required to maintain communication in adverse conditions. This reduction in effort provides benefit to hearing-impaired individuals and also to the conversation partners of hearing-impaired individuals. By considering the impact of amplification on both sides of dyadic conversations, this approach contributes to an increased understanding of the likely impact of hearing impairment on everyday communication.
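The vocal-level modifications analyzed in this study can be summarized as an RMS-level difference between conversation conditions. A minimal sketch of that measure, using synthetic tone segments as hypothetical stand-ins for extracted speech (the segment names and sampling rate are assumptions, not the study's pipeline):

```python
import math

def rms_db(samples):
    """RMS level of a signal segment in dB re full scale."""
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))
    return 20.0 * math.log10(rms)

def level_modification(baseline_seg, adverse_seg):
    """Vocal-level change (dB) from a baseline to an adverse environment
    (positive = the talker raised their level, a Lombard-style effect)."""
    return rms_db(adverse_seg) - rms_db(baseline_seg)

# Hypothetical segments: 1 s of a 220-Hz tone at 16 kHz; the "adverse"
# segment has double the amplitude, so the level change should be ~6 dB.
quiet = [0.1 * math.sin(2 * math.pi * 220 * n / 16000) for n in range(16000)]
loud  = [0.2 * math.sin(2 * math.pi * 220 * n / 16000) for n in range(16000)]
delta = level_modification(quiet, loud)
print(round(delta, 2))  # 6.02
```

Spectral modifications could be summarized analogously, e.g. as the change in high-band vs. low-band energy ratio, but the level measure above is the simplest case.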


2019 ◽  
Vol 23 ◽  
pp. 233121651988761 ◽  
Author(s):  
Gilles Courtois ◽  
Vincent Grimaldi ◽  
Hervé Lissek ◽  
Philippe Estoppey ◽  
Eleftheria Georganti

The auditory system allows the estimation of the distance to sound-emitting objects using multiple spatial cues. In virtual acoustics over headphones, a prerequisite to render auditory distance impression is sound externalization, which denotes the perception of synthesized stimuli outside of the head. Prior studies have found that listeners with mild-to-moderate hearing loss are able to perceive auditory distance and are sensitive to externalization. However, this ability may be degraded by certain factors, such as non-linear amplification in hearing aids or the use of a remote wireless microphone. In this study, 10 normal-hearing and 20 moderate-to-profound hearing-impaired listeners were instructed to estimate the distance of stimuli processed with different methods yielding various perceived auditory distances in the vicinity of the listeners. Two different configurations of non-linear amplification were implemented, and a novel feature aiming to restore a sense of distance in wireless microphone systems was tested. The results showed that the hearing-impaired listeners, even those with a profound hearing loss, were able to discriminate nearby and far sounds that were equalized in level. Their perception of auditory distance was however more contracted than in normal-hearing listeners. Non-linear amplification was found to distort the original spatial cues, but no adverse effect on the ratings of auditory distance was evident. Finally, it was shown that the novel feature was successful in allowing the hearing-impaired participants to perceive externalized sounds with wireless microphone systems.
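The "contracted" distance percept reported above is commonly modeled as a compressive power function, perceived = k·d^a, where an exponent a < 1 indicates compression relative to veridical perception (a = 1). A small sketch of fitting that function by least squares in log-log space, with entirely hypothetical ratings:

```python
import math

def fit_power_law(distances, estimates):
    """Fit perceived = k * d**a by linear regression in log-log space.
    An exponent a < 1 indicates a compressed distance percept."""
    xs = [math.log(d) for d in distances]
    ys = [math.log(e) for e in estimates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    k = math.exp(my - a * mx)
    return k, a

# Hypothetical listener whose estimates grow roughly as sqrt(distance)
dists = [1.0, 2.0, 4.0, 8.0]   # presented distances, m
ests  = [1.0, 1.41, 2.0, 2.83] # reported distances, m
k, a = fit_power_law(dists, ests)
print(round(a, 2))  # 0.5 -> compressed relative to veridical (a = 1)
```

Comparing fitted exponents between groups is one way to quantify how much more contracted the hearing-impaired listeners' distance percept was.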


1979 ◽  
Vol 22 (2) ◽  
pp. 236-246 ◽  
Author(s):  
Jeffrey L. Danhauer ◽  
Ruth M. Lawarre

Perceptual patterns in rating dissimilarities among 24 CVs were investigated for a group of normal-hearing and two groups of hearing-impaired subjects (one group with flat, and one group with sloping, sensorineural losses). Stimuli were presented binaurally at most comfortable loudness level, and subjects rated the 576 paired stimuli on a 1–7 equal-appearing interval scale. Ratings were submitted to individual-group and combined INDSCAL analyses to describe the features used by the subjects in their perception of the speech stimuli. Results revealed features such as sibilant, sonorant, plosive, and place. Furthermore, normal and hearing-impaired subjects used similar features, and subjects' weightings of features were relatively independent of their audiometric configurations. Results are compared to those of previous studies.
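INDSCAL belongs to the multidimensional-scaling family: it recovers a shared stimulus space from dissimilarity matrices, with per-subject dimension weights. A sketch of the simpler single-matrix case, classical (Torgerson) MDS, which illustrates the core step of turning dissimilarities into coordinates (the toy data are assumptions, not the study's ratings):

```python
import numpy as np

def classical_mds(D, dims=2):
    """Classical (Torgerson) MDS: recover a spatial configuration from a
    symmetric dissimilarity matrix D. INDSCAL extends this idea with
    per-subject dimension weights; this is the single-matrix case."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    order = np.argsort(w)[::-1]              # largest eigenvalues first
    w, V = w[order[:dims]], V[:, order[:dims]]
    return V * np.sqrt(np.maximum(w, 0.0))   # coordinates, one row per stimulus

# Toy dissimilarities among four stimuli laid out on a line at 0, 1, 2, 3
pts = np.array([0.0, 1.0, 2.0, 3.0])
D = np.abs(pts[:, None] - pts[None, :])
X = classical_mds(D, dims=1)
# Recovered inter-point distances match the original dissimilarities
recovered = np.abs(X[:, 0][:, None] - X[:, 0][None, :])
print(np.round(recovered, 2))
```

For Euclidean dissimilarities the recovery is exact up to rotation and reflection; interpreting the resulting axes as perceptual features (sibilance, place, etc.) is the analysis step the study performed.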


1976 ◽  
Vol 19 (2) ◽  
pp. 279-289 ◽  
Author(s):  
Randall B. Monsen

Although it is well known that the speech produced by the deaf is generally of low intelligibility, the sources of this low speech intelligibility have generally been ascribed either to aberrant articulation of phonemes or inappropriate prosody. This study was designed to determine to what extent a nonsegmental aspect of speech, formant transitions, may differ in the speech of the deaf and of the normal hearing. The initial second formant transitions of the vowels /i/ and /u/ after labial and alveolar consonants (/b, d, f/) were compared in the speech of six normal-hearing and six hearing-impaired adolescents. In the speech of the hearing-impaired subjects, the second formant transitions may be reduced both in time and in frequency. At its onset, the second formant may be nearer to its eventual target frequency than in the speech of the normal subjects. Since formant transitions are important acoustic cues for the adjacent consonants, reduced F2 transitions may be an important factor in the low intelligibility of the speech of the deaf.
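The two quantities compared above, transition extent (frequency) and transition duration (time), can be read directly off a framewise F2 track. A sketch of that measurement on a hypothetical token (the track values, frame rate, and settling criterion are all illustrative assumptions):

```python
def f2_transition(f2_track_hz, frame_ms=10.0, settle_hz=30.0):
    """Summarize an initial F2 transition from a formant track:
    extent   = target frequency minus onset frequency (Hz);
    duration = time until the track first comes within settle_hz
    of the target (ms). Purely illustrative criterion."""
    onset, target = f2_track_hz[0], f2_track_hz[-1]
    duration = 0.0
    for i, f in enumerate(f2_track_hz):
        if abs(f - target) <= settle_hz:
            duration = i * frame_ms
            break
    return target - onset, duration

# Hypothetical /bi/ token: F2 rising from a labial locus toward the /i/ target
track = [1100, 1400, 1700, 1950, 2080, 2120, 2130]  # Hz, 10-ms frames
extent, duration = f2_transition(track)
print(extent, duration)  # 1030 50.0
```

A hearing-impaired talker's token with an onset already near the target would yield both a smaller extent and a shorter duration, the reduction pattern the study reports.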


2021 ◽  
Vol 25 ◽  
pp. 233121652110161
Author(s):  
Michal Fereczkowski ◽  
Torsten Dau ◽  
Ewen N. MacDonald

While an audiogram is a useful method of characterizing hearing loss, it has been suggested that including a complementary, suprathreshold measure, for example, a measure of the status of the cochlear active mechanism, could lead to improved diagnostics and improved hearing-aid fitting in individual listeners. While several behavioral and physiological methods have been proposed to measure the cochlear-nonlinearity characteristics, evidence of a good correspondence between them is lacking, at least in the case of hearing-impaired listeners. If this lack of correspondence is due to, for example, limited reliability of one such measure, it might explain the limited evidence of the benefit of measuring peripheral compression. The aim of this study was to investigate the relation between measures of the peripheral-nonlinearity status estimated using two psychoacoustical methods (based on the notched-noise and temporal-masking curve methods) and otoacoustic emissions, in a large sample of hearing-impaired listeners. While the relation between the estimates from the notched-noise and the otoacoustic-emission experiments was found to be stronger than predicted by the audiogram alone, the relations between these two measures and the temporal-masking-based measure did not show the same pattern; that is, the variance shared by either of the two measures with the temporal-masking-curve-based measure was also shared with the audiogram.
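The shared-variance argument above is the logic of partial correlation: the association left between two measures once a third (here, the audiogram) is partialed out. A minimal sketch with constructed data (the variable names and values are hypothetical stand-ins for the study's measures):

```python
import math

def pearson_r(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

def partial_r(x, y, z):
    """Correlation between x and y with z (e.g., the audiogram) partialed
    out: the variance x and y share beyond what both share with z."""
    rxy, rxz, ryz = pearson_r(x, y), pearson_r(x, z), pearson_r(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# Constructed example: z is the "audiogram"; s is a component orthogonal to z.
z  = [1.0, 2.0, 3.0, 4.0, 5.0]
s  = [0.5, -1.0, 1.0, -1.0, 0.5]
x  = [zi + si for zi, si in zip(z, s)]  # e.g., notched-noise estimate
y  = [zi + si for zi, si in zip(z, s)]  # shares s with x beyond z
y2 = [zi - si for zi, si in zip(z, s)]  # shares only z with x
print(round(partial_r(x, y, z), 2), round(partial_r(x, y2, z), 2))
```

In the first case the two measures remain perfectly related after removing the audiogram; in the second, their apparent agreement was carried entirely by the audiogram, the pattern the study reports for the temporal-masking-based measure.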

