Instantaneous Binaural Target PSD Estimation for Hearing Aid Noise Reduction in Complex Acoustic Environments

2011 ◽  
Vol 60 (4) ◽  
pp. 1141-1154 ◽  
Author(s):  
A. Homayoun Kamkar-Parsi ◽  
Martin Bouchard


2020 ◽
Vol 63 (4) ◽  
pp. 1299-1311 ◽  
Author(s):  
Timothy Beechey ◽  
Jörg M. Buchholz ◽  
Gitte Keidser

Objectives This study investigates the hypothesis that hearing aid amplification reduces effort within conversation for both hearing aid wearers and their communication partners. Levels of effort, in the form of speech production modifications, required to maintain successful spoken communication in a range of acoustic environments are compared to earlier reported results measured in unaided conversation conditions. Design Fifteen young adult normal-hearing participants and 15 older adult hearing-impaired participants were tested in pairs. Each pair consisted of one young normal-hearing participant and one older hearing-impaired participant. Hearing-impaired participants received directional hearing aid amplification, according to their audiogram, via a master hearing aid with gain provided according to the NAL-NL2 fitting formula. Pairs of participants were required to take part in naturalistic conversations through the use of a referential communication task. Each pair took part in five conversations, each of 5-min duration. During each conversation, participants were exposed to one of five different realistic acoustic environments presented through highly open headphones. The ordering of acoustic environments across experimental blocks was pseudorandomized. Resulting recordings of conversational speech were analyzed to determine the magnitude of speech modifications, in terms of vocal level and spectrum, produced by normal-hearing talkers as a function of both acoustic environment and the degree of high-frequency average hearing impairment of their conversation partner. Results The magnitude of spectral modifications of speech produced by normal-hearing talkers during conversations with aided hearing-impaired interlocutors was smaller than the speech modifications observed during conversations between the same pairs of participants in the absence of hearing aid amplification. 
Conclusions The provision of hearing aid amplification reduces the effort required to maintain communication in adverse conditions. This reduction in effort provides benefit to hearing-impaired individuals and also to the conversation partners of hearing-impaired individuals. By considering the impact of amplification on both sides of dyadic conversations, this approach contributes to an increased understanding of the likely impact of hearing impairment on everyday communication.


Author(s):  
Isiaka Ajewale Alimi

Digital hearing aids address the issues of noise and speech intelligibility associated with analogue types. One of the main functions of the digital signal processor (DSP) in a digital hearing aid is noise reduction, which can be achieved through speech enhancement algorithms that in turn improve system performance and flexibility. However, studies have shown that the quality of experience (QoE) with some current hearing aids falls short of expectations in noisy environments due to interfering sound, background noise, and reverberation. It has also been suggested that the noise reduction features of the DSP can be further improved. Recently, we proposed an adaptive spectral subtraction algorithm to enhance the performance of communication systems and to address the musical noise generated by the conventional spectral subtraction algorithm. The effectiveness of the algorithm has been confirmed by different objective and subjective evaluations. In this study, the adaptive spectral subtraction algorithm is implemented using a noise-estimation algorithm for highly non-stationary noisy environments, instead of the voice activity detection (VAD) employed in our previous work, because of its effectiveness. In addition, the signal-to-residual spectrum ratio (SR) is used to control amplification distortion and thereby improve speech intelligibility. The results show that the proposed scheme gives comparatively better performance and can easily be employed in digital hearing aid systems to improve speech quality and intelligibility.
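For context, the conventional magnitude spectral subtraction that the proposed adaptive algorithm improves upon can be sketched in a few lines. This is a minimal illustration, not the paper's adaptive method: the noise spectrum here is simply estimated from the first few frames (assumed noise-only) rather than by the noise-estimation algorithm the study uses, and the over-subtraction factor and spectral floor values are assumptions chosen only to show how musical noise is typically limited.

```python
import numpy as np

def spectral_subtract(noisy, noise_est_frames=6, frame_len=256, hop=128,
                      over_sub=2.0, floor=0.02):
    """Basic magnitude spectral subtraction with an over-subtraction
    factor and a spectral floor to limit musical noise.
    All parameter values are illustrative defaults, not tuned."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(noisy) - frame_len) // hop
    frames = np.stack([noisy[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    spectra = np.fft.rfft(frames, axis=1)
    mag, phase = np.abs(spectra), np.angle(spectra)
    # Noise estimate from the first frames, assumed to be noise-only
    # (a VAD or a dedicated noise tracker would normally supply this).
    noise_mag = mag[:noise_est_frames].mean(axis=0)
    # Over-subtract, then clamp to a fraction of the noisy magnitude.
    clean_mag = np.maximum(mag - over_sub * noise_mag, floor * mag)
    clean = np.fft.irfft(clean_mag * np.exp(1j * phase),
                         n=frame_len, axis=1)
    # Overlap-add resynthesis with the same window.
    out = np.zeros(len(noisy))
    for i in range(n_frames):
        out[i * hop:i * hop + frame_len] += clean[i] * window
    return out
```

The isolated spectral peaks that survive the subtraction are what listeners hear as musical noise; an adaptive variant adjusts the subtraction per frame instead of using the fixed factors shown here.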


2016 ◽  
Vol 27 (09) ◽  
pp. 732-749 ◽  
Author(s):  
Gabriel Aldaz ◽  
Sunil Puria ◽  
Larry J. Leifer

Background: Previous research has shown that hearing aid wearers can successfully self-train their instruments’ gain-frequency response and compression parameters in everyday situations. Combining hearing aids with a smartphone introduces additional computing power, memory, and a graphical user interface that may enable greater setting personalization. To explore the benefits of self-training with a smartphone-based hearing system, a parameter space was chosen with four possible combinations of microphone mode (omnidirectional and directional) and noise reduction state (active and off). The baseline for comparison was the “untrained system,” that is, the manufacturer’s algorithm for automatically selecting microphone mode and noise reduction state based on acoustic environment. The “trained system” first learned each individual’s preferences, self-entered via a smartphone in real-world situations, to build a trained model. The system then predicted the optimal setting (among available choices) using an inference engine, which considered the trained model and current context (e.g., sound environment, location, and time). Purpose: To develop a smartphone-based prototype hearing system that can be trained to learn preferred user settings, and to determine whether user study participants showed a preference for trained over untrained system settings. Research Design: An experimental within-participants study. Participants used a prototype hearing system—comprising two hearing aids, Android smartphone, and body-worn gateway device—for ~6 weeks. Study Sample: Sixteen adults with mild-to-moderate sensorineural hearing loss (HL) (ten males, six females; mean age = 55.5 yr). Fifteen had ≥6 mo of experience wearing hearing aids, and 14 had previous experience using smartphones.
Intervention: Participants were fitted and instructed to perform daily comparisons of settings (“listening evaluations”) through a smartphone-based software application called Hearing Aid Learning and Inference Controller (HALIC). In the four-week-long training phase, HALIC recorded individual listening preferences along with sensor data from the smartphone—including environmental sound classification, sound level, and location—to build trained models. In the subsequent two-week-long validation phase, participants performed blinded listening evaluations comparing settings predicted by the trained system (“trained settings”) to those suggested by the hearing aids’ untrained system (“untrained settings”). Data Collection and Analysis: We analyzed data collected on the smartphone and hearing aids during the study. We also obtained audiometric and demographic information. Results: Overall, the 15 participants with valid data significantly preferred trained settings to untrained settings (paired-samples t test). Seven participants had a significant preference for trained settings, while one had a significant preference for untrained settings (binomial test). The remaining seven participants had nonsignificant preferences. Pooling data across participants, the proportion of times each setting was chosen in a given environmental sound class was on average very similar across settings. However, breaking down the data by participant revealed strong and idiosyncratic individual preferences. Fourteen participants reported positive feelings of clarity, competence, and mastery when training via HALIC. Conclusions: The obtained data, as well as subjective participant feedback, indicate that smartphones could become viable tools to train hearing aids. Individuals who are tech savvy and have milder HL seem well suited to take advantage of the benefits offered by training with a smartphone.
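The train-then-predict loop described above can be illustrated with a toy model. This sketch is a deliberate simplification of HALIC, not its actual inference engine: it keeps only a per-sound-class tally of the settings the wearer chose and predicts the majority choice, ignoring the location and time context the real system also weighed. The setting labels and the fallback default are hypothetical.

```python
from collections import Counter, defaultdict

# Hypothetical setting labels: (microphone mode, noise reduction state).
SETTINGS = [("omni", "nr_on"), ("omni", "nr_off"),
            ("dir", "nr_on"), ("dir", "nr_off")]

class PreferenceModel:
    """Toy trained model: for each environmental sound class, count how
    often each setting was chosen during listening evaluations, then
    predict the majority choice for that class."""

    def __init__(self, default=("dir", "nr_on")):
        self.counts = defaultdict(Counter)
        self.default = default  # assumed untrained fallback setting

    def record_preference(self, sound_class, chosen_setting):
        # One logged listening evaluation outcome.
        self.counts[sound_class][chosen_setting] += 1

    def predict(self, sound_class):
        # With no training data for this class, behave like the
        # untrained system; otherwise return the majority preference.
        if not self.counts[sound_class]:
            return self.default
        return self.counts[sound_class].most_common(1)[0][0]
```

A fuller model would condition on location and time of day as well, as the study's inference engine did.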


2012 ◽  
Vol 23 (08) ◽  
pp. 606-615 ◽  
Author(s):  
HaiHong Liu ◽  
Hua Zhang ◽  
Ruth A. Bentler ◽  
Demin Han ◽  
Luo Zhang

Background: Transient noise can be disruptive for people wearing hearing aids. Ideally, transient noise should be detected and controlled by the signal processor without disrupting speech and other intended input signals. A technology for detecting and controlling transient noises in hearing aids was evaluated in this study. Purpose: The purpose of this study was to evaluate the effectiveness of a transient noise reduction strategy on various transient noises and to determine whether the strategy has a negative impact on the sound quality of intended speech inputs. Research Design: This was a quasi-experimental study involving 24 hearing aid users. Each participant was asked to rate speech clarity, transient noise loudness, and overall impression for speech stimuli under the algorithm-on and algorithm-off conditions. During the evaluation, three types of stimuli were used: transient noises, speech, and background noises. The transient noises included “knife on a ceramic board,” “mug on a tabletop,” “office door slamming,” “car door slamming,” and “pen tapping on countertop.” The speech sentences used for the test were presented by a male speaker in Mandarin. The background noises included “party noise” and “traffic noise.” All of these sounds were combined into five listening situations: (1) speech only, (2) transient noise only, (3) speech and transient noise, (4) background noise and transient noise, and (5) speech, background noise, and transient noise. Results: There was no significant difference in the ratings of speech clarity between the algorithm-on and algorithm-off conditions (t-test, p = 0.103). Further analysis revealed that speech clarity was significantly better at 70 dB SPL than at 55 dB SPL (p < 0.001). For transient noise loudness: under the algorithm-off condition, the percentages of subjects rating the transient noise as somewhat soft, appropriate, somewhat loud, and too loud were 0.2, 47.1, 29.6, and 23.1%, respectively. The corresponding percentages under the algorithm-on condition were 3.0, 72.6, 22.9, and 1.4%, respectively. A significant difference in the ratings of transient noise loudness was found between the algorithm-on and algorithm-off conditions (t-test, p < 0.001). For overall impression of speech stimuli: under the algorithm-off condition, the percentages of subjects rating the algorithm as not helpful at all, somewhat helpful, helpful, and very helpful were 36.5, 20.8, 33.9, and 8.9%, respectively. Under the algorithm-on condition, the corresponding percentages were 35.0, 19.3, 30.7, and 15.0%, respectively. Statistical analysis revealed a significant difference in the ratings of overall impression of speech stimuli: ratings under the algorithm-on condition indicated significantly greater helpfulness for speech understanding than ratings under the algorithm-off condition (t-test, p < 0.001). Conclusions: The transient noise reduction strategy appropriately controlled the loudness of most of the transient noises without affecting sound quality, which could be beneficial to hearing aid wearers.
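A common way to detect and control transients of the kind tested here (door slams, pen taps) is to compare a fast and a slow envelope follower and attenuate whenever the fast envelope jumps well ahead of the slow one. The sketch below is a generic illustration of that idea, not the evaluated commercial strategy; all time constants, thresholds, and the fixed attenuation are assumed values.

```python
import numpy as np

def suppress_transients(x, fs=16000, fast_ms=0.5, slow_ms=50.0,
                        thresh_db=12.0, atten_db=10.0):
    """Illustrative transient-noise control: a fast envelope follower
    reacts to sudden bursts while a slow one tracks the ambient level;
    when the fast envelope exceeds the slow one by thresh_db, a fixed
    attenuation is applied. Returns the processed signal and the gain
    trajectory."""
    a_fast = np.exp(-1.0 / (fs * fast_ms / 1000.0))
    a_slow = np.exp(-1.0 / (fs * slow_ms / 1000.0))
    env_fast = env_slow = 1e-9
    gain = np.ones_like(x)
    atten = 10 ** (-atten_db / 20.0)
    ratio = 10 ** (thresh_db / 20.0)
    for i, s in enumerate(np.abs(x)):
        env_fast = a_fast * env_fast + (1 - a_fast) * s
        env_slow = a_slow * env_slow + (1 - a_slow) * s
        if env_fast > ratio * env_slow:
            gain[i] = atten  # transient detected: duck the signal
    return x * gain, gain
```

Real implementations additionally smooth the gain over time and restore it quickly after the burst so that speech onsets are not dulled.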


2018 ◽  
Vol 8 (1) ◽  
Author(s):  
Haniyeh Salehi ◽  
Vijay Parsa ◽  
Paula Folkeard

Wireless remote microphones (RMs) transmit the desired acoustic signal to the hearing aid (HA) and facilitate enhanced listening in challenging environments. Fitting and verification of RMs, and benchmarking the relative performance of different RM devices in varied acoustic environments are of significant interest to Audiologists and RM developers. This paper investigates the application of instrumental speech intelligibility and quality metrics for characterizing the RM performance in two acoustic environments with varying amounts of background noise and reverberation. In both environments, two head and torso simulators (HATS) were placed 2 m apart, where one HATS served as the talker and the other served as the listener. Four RM systems were interfaced separately with a HA programmed to match the prescriptive targets for the N4 standard audiogram and placed on the listener HATS. The HA output in varied acoustic conditions was recorded and analyzed offline through computational models predicting speech intelligibility and quality. Results showed performance differences among the four RMs in the presence of noise and/or reverberation, with one RM exhibiting significantly better performance. Clinical implications and applications of these results are discussed.
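As a simplified stand-in for the computational intelligibility and quality models used in such offline analyses (e.g., STOI- or HASQI-style metrics), a frame-wise segmental SNR between the clean talker signal and the recorded hearing aid output captures the basic intrusive-metric workflow. The frame sizes and clamping limits below are conventional choices, assumed here rather than taken from the study.

```python
import numpy as np

def segmental_snr(reference, processed, frame_len=512, hop=256,
                  snr_min=-10.0, snr_max=35.0):
    """Simplified intrusive metric: frame-wise SNR between a clean
    reference and the recorded device output, clamped per frame to
    keep silent frames from dominating, then averaged."""
    n = min(len(reference), len(processed))
    snrs = []
    for start in range(0, n - frame_len + 1, hop):
        ref = reference[start:start + frame_len]
        err = ref - processed[start:start + frame_len]
        num = np.sum(ref ** 2)
        den = np.sum(err ** 2) + 1e-12
        snr = 10 * np.log10(num / den + 1e-12)
        snrs.append(np.clip(snr, snr_min, snr_max))
    return float(np.mean(snrs))
```

Published models go further by filtering into auditory bands, correlating envelopes, and modeling the listener's hearing loss; this sketch only illustrates the clean-versus-recorded comparison underlying them.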


2020 ◽  
Vol 10 (17) ◽  
pp. 6077
Author(s):  
Gyuseok Park ◽  
Woohyeong Cho ◽  
Kyu-Sung Kim ◽  
Sangmin Lee

Hearing aids are small electronic devices designed to improve hearing for persons with impaired hearing, using sophisticated audio signal processing algorithms and technologies. In general, the speech enhancement algorithms in hearing aids remove environmental noise and enhance speech while taking into account the wearer's hearing characteristics and the acoustic surroundings. In this study, a speech enhancement algorithm was proposed to improve speech quality in a hearing aid environment by applying noise reduction with deep neural network learning based on noise classification. To evaluate the speech enhancement in an actual hearing aid environment, ten types of noise were self-recorded and classified using convolutional neural networks. In addition, noise reduction for speech enhancement in the hearing aid was applied by deep neural networks based on the noise classification. As a result, the speech quality achieved by the deep-neural-network noise removal, combined with environmental noise classification, exhibited a significant improvement over that of the conventional hearing aid algorithm. The improved speech quality was also confirmed by objective measures: the perceptual evaluation of speech quality (PESQ) score, the short-time objective intelligibility (STOI) score, the overall quality composite measure, and the log-likelihood ratio score.
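The classify-then-denoise pipeline can be illustrated with a deliberately simple classifier. The study used convolutional neural networks on self-recorded noise; the sketch below substitutes a nearest-centroid classifier over average log-spectral features purely for illustration, so the feature choice and every parameter are assumptions. A real system would feed the predicted noise class to a class-specific denoising network.

```python
import numpy as np

def log_spectral_features(signal, frame_len=256, hop=128):
    """Average log-magnitude spectrum: a crude stand-in for the
    spectrogram patches a CNN classifier would consume."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    window = np.hanning(frame_len)
    frames = np.stack([signal[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))
    return np.log(mag + 1e-9).mean(axis=0)

class NoiseClassifier:
    """Nearest-centroid classifier over log-spectral features; the
    study itself used a convolutional neural network, which this toy
    replaces for illustration only."""

    def fit(self, examples):
        # examples: {noise_label: [example signals]}
        self.centroids = {
            label: np.mean([log_spectral_features(s) for s in sigs],
                           axis=0)
            for label, sigs in examples.items()}
        return self

    def predict(self, signal):
        feat = log_spectral_features(signal)
        return min(self.centroids,
                   key=lambda lab: np.linalg.norm(
                       feat - self.centroids[lab]))
```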


2013 ◽  
Vol 24 (10) ◽  
pp. 980-991 ◽  
Author(s):  
Kristi Oeding ◽  
Michael Valente

Background: In the past, bilateral contralateral routing of signals (BICROS) amplification incorporated omnidirectional microphones on the transmitter and receiver sides and some models utilized noise reduction (NR) on the receiver side. Little research has examined the performance of BICROS amplification in background noise. However, previous studies examining contralateral routing of signals (CROS) amplification have reported that the presence of background noise on the transmitter side negatively affected speech recognition. Recently, NR was introduced as a feature on the receiver and transmitter sides of BICROS amplification, which has the potential to decrease the impact of noise on the wanted speech signal by decreasing unwanted noise directed to the transmitter side. Purpose: The primary goal of this study was to examine differences in the reception threshold for sentences (RTS in dB) using the Hearing in Noise Test (HINT) in a diffuse listening environment between unaided and three aided BICROS conditions (no NR, mild NR, and maximum NR) in the Tandem 16 BICROS. A secondary goal was to examine real-world subjective impressions of the Tandem 16 BICROS compared to unaided. Research Design: A randomized block repeated measures single blind design was used to assess differences between no NR, mild NR, and maximum NR listening conditions. Study Sample: Twenty-one adult participants with asymmetric sensorineural hearing loss (ASNHL) and experience with BICROS amplification were recruited from Washington University in St. Louis School of Medicine. Data Collection and Analysis: Participants were fit with the National Acoustic Laboratories’ Nonlinear version 1 prescriptive target (NAL-NL1) with the Tandem 16 BICROS at the initial visit and then verified using real-ear insertion gain (REIG) measures. Participants acclimatized to the Tandem 16 BICROS for 4 wk before returning for final testing. 
Participants were tested utilizing HINT sentences examining differences in RTS between unaided and three aided listening conditions. Subjective benefit was determined via the Abbreviated Profile of Hearing Aid Benefit (APHAB) questionnaire between the Tandem 16 BICROS and unaided. A repeated measures analysis of variance (ANOVA) was utilized to analyze the results of the HINT and APHAB. Results: Results revealed no significant differences in the RTS between unaided, no NR, mild NR, and maximum NR. Subjective impressions using the APHAB revealed statistically and clinically significant benefit with the Tandem 16 BICROS compared to unaided for the Ease of Communication (EC), Background Noise (BN), and Reverberation (RV) subscales. Conclusions: The RTS was not significantly different between unaided, no NR, mild NR, and maximum NR. None of the three aided listening conditions were significantly different from unaided performance as has been reported for previous studies examining CROS hearing aids. Further, based on comments from participants and previous research studies with conventional hearing aids, manufacturers of BICROS amplification should consider incorporating directional microphones and independent volume controls on the receiver and transmitter sides to potentially provide further improvement in signal-to-noise ratio (SNR) for patients with ASNHL.
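The HINT's reception threshold for sentences is measured with an adaptive up-down track: the SNR is lowered after each correctly repeated sentence and raised after each error, and the levels of the later trials are averaged to estimate the SNR for 50% sentence intelligibility. The sketch below shows that general staircase logic with illustrative step sizes and trial counts, not the exact clinical HINT protocol.

```python
def track_rts(respond, start_snr=0.0, step=2.0, n_sentences=20):
    """Simplified adaptive sentence track: `respond(snr)` returns True
    if the listener repeated the sentence correctly at that SNR. The
    SNR is lowered after a correct response and raised after an error;
    the mean of the later trial levels estimates the RTS. Step size
    and trial count are illustrative, not the clinical protocol."""
    snr = start_snr
    history = []
    for _ in range(n_sentences):
        correct = respond(snr)
        history.append(snr)
        snr += -step if correct else step
    # Drop the initial approach trials before averaging.
    return sum(history[4:]) / len(history[4:])
```

With a deterministic listener who succeeds above -5 dB SNR, the track oscillates around that threshold and the average converges to it.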


2014 ◽  
Vol 25 (06) ◽  
pp. 584-591 ◽  
Author(s):  
Clifford A. Franklin ◽  
Letitia J. White ◽  
Thomas C. Franklin ◽  
Laura Smith-Olinde

Background: The acceptable noise level (ANL) indicates how much background noise a listener is willing to accept while listening to speech. Clinically, the ANL measure is applied as a predictor of hearing-aid use. The ANL may also correlate with the percentage of time spent in different listening environments (e.g., quiet, noise, and noise with speech present). Information retrieved from data logging could confirm this relationship. Data logging, using sound scene analysis, is a method of monitoring the different characteristics of the listening environments that a hearing-aid user experiences over a period of time. Purpose: The purpose of this study was to determine whether the ANL procedure reflects the proportion of time a person spends in different acoustic environments. Research Design: This was a descriptive quasi-experimental design to collect pilot data in which participants were asked to maintain their regular daily activities while wearing a data-logging device. Study Sample: After completing the ANL measurement, 29 normal-hearing listeners were provided a data-logging device and were instructed on its proper use. Data Collection/Analysis: ANL measures were obtained along with the percentage of time participants spent in listening environments classified as quiet, speech-in-quiet, speech-in-noise, and noise via a data-logging device. Results: An analysis of variance using a general linear model indicated that listeners with low ANL values spent more time in acoustic environments in which background noise was present than did those with high ANL values; the ANL data did not indicate differences in how much time listeners spent in environments of differing intensities. Conclusions: To some degree, the ANL reflects the acoustic environments and the amount of noise that the listener is willing to accept; data logging illustrates the acoustic environments in which the listener was present.
Clinical implications include, but are not limited to, decisions in patient care regarding the need for additional counseling and/or the use of digital noise reduction and directional microphone technology.

