The Effect of a High Upper Input Limiting Level on Word Recognition in Noise, Sound Quality Preferences, and Subjective Ratings of Real-World Performance

2015 ◽  
Vol 26 (06) ◽  
pp. 547-562
Author(s):  
Kristi Oeding ◽  
Michael Valente

Background: One important factor in front-end processing is the analog-to-digital converter within current hearing aids. The average input dynamic range of hearing aids is 96 dB, with an upper input limiting level (UILL) of 95–105 dB SPL. The UILL of standard hearing aids can therefore distort loud inputs such as loud speech or music, which have root-mean-square levels of 90 and 105 dB SPL with crest factors of 12 dB and 14–20 dB, respectively. Peaks of these loud sounds can thus exceed the input limiting level and reach the patient as a distorted signal. Purpose: To examine whether significant differences in word recognition in noise, sound quality preferences, and subjective ratings of real-world performance exist between conventional and high UILL hearing aids. Research Design: Word recognition in noise and sound quality preferences were assessed using recordings made on a Knowles Electronics Manikin for Acoustic Research (KEMAR) with conventional and high UILL hearing aids across different microphone modes and listening conditions. Participants wore the hearing aids for 2 mo and completed questionnaires on subjective performance. Study Sample: Ten adults with bilateral slight to moderately severe sensorineural hearing loss were recruited. Results: A four-factor repeated-measures analysis of variance (ANOVA) revealed significant differences between the conventional and high UILL across microphone modes and listening conditions for words in noise [F(2, 18) = 6.0; p < 0.05]. A three-factor repeated-measures ANOVA for sound quality preferences revealed a significant difference only for presentation level [F(1, 9) = 81.0; p < 0.001]. A one-factor ANOVA did not reveal significant differences between the conventional and high UILL on subjective ratings of real-world performance.
Conclusions: Word recognition and sound quality preferences revealed significant differences between the conventional and high UILL; however, there were no differences in subjective ratings of real-world performance. One participant preferred the conventional UILL, two the high UILL, and seven thought performance was equal, which may be due to the listening environments participants encountered, as evidenced by datalogging.
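The clipping argument in this abstract rests on simple level arithmetic: a signal's peak sits roughly one crest factor above its RMS level, so any peak above the converter's upper input limiting level is clipped before amplification. A minimal sketch using the levels quoted in the abstract (the helper name and the choice of 95 dB SPL, the low end of the cited 95–105 dB SPL range, are illustrative):

```python
# Peak level (dB SPL) ~= RMS level (dB SPL) + crest factor (dB).
# Levels are taken from the abstract; names are illustrative only.

def peak_level_db(rms_db_spl, crest_factor_db):
    """Estimated peak level of a signal given its RMS level and crest factor."""
    return rms_db_spl + crest_factor_db

CONVENTIONAL_UILL_DB_SPL = 95.0  # low end of the cited 95-105 dB SPL range

loud_speech_peak = peak_level_db(90.0, 12.0)   # 102 dB SPL
loud_music_peak = peak_level_db(105.0, 14.0)   # 119 dB SPL

# Both peaks exceed a 95 dB SPL upper input limiting level, so the
# analog-to-digital converter would clip them before any processing.
print(loud_speech_peak > CONVENTIONAL_UILL_DB_SPL)  # True
print(loud_music_peak > CONVENTIONAL_UILL_DB_SPL)   # True
```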

2016 ◽  
Vol 27 (02) ◽  
pp. 085-102 ◽  
Author(s):  
Bernadette Rakszawski ◽  
Rose Wright ◽  
Jamie H. Cadieux ◽  
Lisa S. Davidson ◽  
Christine Brenner

Background: Cochlear implants (CIs) have been shown to improve children’s speech recognition over traditional amplification when severe-to-profound sensorineural hearing loss is present. Despite these improvements, understanding speech at low intensity levels or in the presence of background noise remains difficult. In an effort to improve speech understanding in challenging environments, Cochlear Ltd. offers preprocessing strategies that apply various algorithms before mapping the signal to the internal array. Two of these strategies are Autosensitivity Control™ (ASC) and Adaptive Dynamic Range Optimization (ADRO®). Based on previous research, the manufacturer’s default preprocessing strategy for pediatric everyday programs combines ASC + ADRO®. Purpose: To compare pediatric speech perception performance across various preprocessing strategies while applying a specific programming protocol using increased threshold levels to ensure access to very low-level sounds. Research Design: This was a prospective, cross-sectional, observational study. Participants completed speech perception tasks in four preprocessing conditions: no preprocessing, ADRO®, ASC, and ASC + ADRO®. Study Sample: Eleven pediatric Cochlear Ltd. CI users were recruited: six bilateral, one unilateral, and four bimodal. Intervention: Four programs based on each participant’s everyday map were loaded into the processor, with a different preprocessing strategy applied in each: no preprocessing, ADRO®, ASC, and ASC + ADRO®. Data Collection and Analysis: Participants repeated consonant–nucleus–consonant (CNC) words presented at 50 and 70 dB SPL in quiet and Hearing in Noise Test (HINT) sentences presented adaptively in competing R-Space™ noise at 60 and 70 dB SPL. Each measure was completed as participants listened with each of the four preprocessing strategies listed above. Test order and conditions were randomized.
A repeated-measures analysis of variance was used to compare preprocessing strategies for the group, and critical differences were used to determine significant score differences between strategies for individual participants. Results: For CNC words presented at 50 dB SPL, the group data revealed significantly better scores using ASC + ADRO® compared with all other preprocessing conditions, while ASC resulted in poorer scores than ADRO® and ASC + ADRO®. Group data for HINT sentences presented in 70 dB SPL of R-Space™ noise revealed significantly improved scores using ASC and ASC + ADRO® compared with no preprocessing, with ASC + ADRO® scores better than those for ADRO® alone. Group data for CNC words presented at 70 dB SPL and adaptive HINT sentences presented in 60 dB SPL of R-Space™ noise showed no significant differences among conditions. Individual data showed that the preprocessing strategy yielding the best scores varied across measures and participants. Conclusions: Group data reveal an advantage for ASC + ADRO® for speech presented at lower levels and in higher levels of background noise. Individual data revealed that the optimal preprocessing strategy varied among participants, indicating that a variety of preprocessing strategies should be explored for each CI user in light of his or her performance in challenging listening environments.


2008 ◽  
Vol 139 (2_suppl) ◽  
pp. P57-P57
Author(s):  
Drew M Horlbeck ◽  
Herman A Jenkins ◽  
Ben J Balough ◽  
Michael E Hoffer

Objective The efficacy of the Otologics Fully Implantable Hearing Device (MET) was assessed in adult patients with bilateral moderate to severe sensorineural hearing loss. Methods Surgical insertion of this totally implanted system was identical to the Phase I study. A repeated-measures within-subjects design assessed aided sound field thresholds and speech performance with the subject’s own, appropriately fit, walk-in hearing aid(s) and with the Otologics Fully Implantable Hearing Device. Results Six- and 12-month Phase II data will be presented. Ten patients were implanted and activated as part of the Phase II clinical trial. Three patients were lost to long-term follow-up due to two coil failures and one ossicular abnormality preventing proper device placement. No significant differences between preoperative (AC = 59 dB, BC = 55 dB) and postoperative (AC = 61 dB, BC = 54 dB) unaided pure tone averages were noted (p > 0.05). Pure tone average implant-aided thresholds (41 dB) were equivalent to those of the walk-in-aided (37 dB) condition, with no significant difference (p > 0.05) between patients’ walk-in-aided and implant-aided individual frequency thresholds. Word recognition scores and hearing-in-noise scores were similar between the walk-in-aided and implant-aided conditions. Patient benefit scales will be presented for all end points. Conclusions Results of the Otologics MET Fully Implantable Hearing Device Phase II trial provide evidence that this fully implantable device is a viable alternative to currently available hearing aids in patients with sensorineural hearing loss.


2012 ◽  
Vol 23 (08) ◽  
pp. 606-615 ◽  
Author(s):  
HaiHong Liu ◽  
Hua Zhang ◽  
Ruth A. Bentler ◽  
Demin Han ◽  
Luo Zhang

Background: Transient noise can be disruptive for people wearing hearing aids. Ideally, transient noise should be detected and controlled by the signal processor without disrupting speech and other intended input signals. A technology for detecting and controlling transient noises in hearing aids was evaluated in this study. Purpose: To evaluate the effectiveness of a transient noise reduction strategy on various transient noises and to determine whether the strategy has a negative impact on the sound quality of intended speech inputs. Research Design: This was a quasi-experimental study involving 24 hearing aid users. Each participant rated speech clarity, transient noise loudness, and overall impression for speech stimuli under the algorithm-on and algorithm-off conditions. Three types of stimuli were used during the evaluation: transient noises, speech, and background noises. The transient noises included “knife on a ceramic board,” “mug on a tabletop,” “office door slamming,” “car door slamming,” and “pen tapping on a countertop.” The speech sentences were presented by a male speaker in Mandarin. The background noises included “party noise” and “traffic noise.” These sounds were combined into five listening situations: (1) speech only, (2) transient noise only, (3) speech and transient noise, (4) background noise and transient noise, and (5) speech, background noise, and transient noise. Results: There was no significant difference in the ratings of speech clarity between the algorithm-on and algorithm-off conditions (t-test, p = 0.103). Further analysis revealed that speech clarity was significantly better at 70 dB SPL than at 55 dB SPL (p < 0.001). For transient noise loudness: under the algorithm-off condition, the percentages of subjects rating the transient noise as somewhat soft, appropriate, somewhat loud, and too loud were 0.2, 47.1, 29.6, and 23.1%, respectively.
The corresponding percentages under the algorithm-on condition were 3.0, 72.6, 22.9, and 1.4%, respectively. A significant difference in the ratings of transient noise loudness was found between the algorithm-on and algorithm-off conditions (t-test, p < 0.001). For overall impression of speech stimuli: under the algorithm-off condition, the percentages of subjects rating the algorithm as not helpful at all, somewhat helpful, helpful, and very helpful were 36.5, 20.8, 33.9, and 8.9%, respectively. Under the algorithm-on condition, the corresponding percentages were 35.0, 19.3, 30.7, and 15.0%, respectively. Statistical analysis revealed a significant difference in the ratings of overall impression of speech stimuli: ratings under the algorithm-on condition indicated significantly more help for speech understanding than ratings under the algorithm-off condition (t-test, p < 0.001). Conclusions: The transient noise reduction strategy appropriately controlled the loudness of most transient noises and did not degrade sound quality, which could benefit hearing aid wearers.


Author(s):  
Sandra Thorpe ◽  
Carol Jardine

This study investigated the effectiveness of applying the Prescription of Gain/Output (POGO) in hearing aid fittings. Six subjects were tested. Each presented with a binaural mild to moderate sensorineural hearing loss and had previously been fitted monaurally with a behind-the-ear aid using modifications of the traditional Carhart (1946) approach. Functional gain requirements stipulated by POGO were calculated from unaided thresholds and compared with actual functional gain measurements. Five subjects whose functional gain measures were not within prescribed limits were referred for modification of the gain and frequency responses of their hearing aids and earmoulds. Post-modification functional gain measurements were analysed. The extent to which the required functional gain measurements were met was investigated statistically in relation to word recognition scores and subjective ratings of perceived benefit. The conclusion reached was that application of POGO results in improved word recognition scores and self-reported user satisfaction.
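POGO's prescriptive rule, as commonly stated (McCandless and Lyregaard, 1983), sets required insertion gain to half the hearing threshold level, reduced by 10 dB at 250 Hz and 5 dB at 500 Hz. A minimal sketch under that assumption; the thresholds and function name below are hypothetical, not data from the study:

```python
# POGO half-gain rule with low-frequency corrections, as commonly stated.
# Thresholds below are hypothetical illustration values.

POGO_CORRECTION_DB = {250: 10, 500: 5}  # subtracted at these frequencies

def pogo_required_gain(freq_hz, threshold_db_hl):
    """Prescribed insertion gain (dB) at one audiometric frequency."""
    return threshold_db_hl / 2 - POGO_CORRECTION_DB.get(freq_hz, 0)

thresholds = {250: 40, 500: 45, 1000: 50, 2000: 55, 4000: 60}
for freq, hl in thresholds.items():
    print(freq, pogo_required_gain(freq, hl))
# 250 -> 10.0, 500 -> 17.5, 1000 -> 25.0, 2000 -> 27.5, 4000 -> 30.0
```

In the study's terms, these prescribed values would be compared against measured functional gain to decide whether a fitting falls within prescribed limits.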


2002 ◽  
Vol 11 (1) ◽  
pp. 29-41 ◽  
Author(s):  
Todd Ricketts ◽  
Paula Henry

Hearing aids currently available on the market with both omnidirectional and directional microphone modes often have reduced amplification in the low frequencies when in directional microphone mode, a consequence of the close phase matching between the microphones. The effects of this low-frequency gain reduction on individuals with hearing loss in the low frequencies were of primary interest. Changes in sound quality for quiet listening environments following gain compensation in the low frequencies were of secondary interest. Thirty participants were fit with bilateral in-the-ear hearing aids, which were programmed in three ways while in directional microphone mode: no gain compensation, adaptive gain compensation, and full gain compensation. All participants were tested with speech-in-noise tasks and also made sound quality judgments based on monaural recordings made from the hearing aid. Results support a need for gain compensation for individuals with low-frequency hearing loss greater than 40 dB HL.


2015 ◽  
Vol 26 (10) ◽  
pp. 815-823 ◽  
Author(s):  
Jijo Pottackal Mathai ◽  
Sabarish Appu

Background: Auditory neuropathy spectrum disorder (ANSD) is a form of sensorineural hearing loss that causes severe deficits in speech perception. The perceptual problems of individuals with ANSD have been attributed to temporal processing impairment rather than to reduced audibility, which makes rehabilitation with hearing aids difficult. Although hearing aids can restore audibility, the compression circuits in a hearing aid might distort the temporal modulations of speech, causing poor aided performance. Therefore, hearing aid settings that preserve the temporal modulations of speech might be an effective way to improve speech perception in ANSD. Purpose: To investigate the perception of hearing aid–processed speech in individuals with late-onset ANSD. Research Design: A repeated measures design was used to study the effect of various compression time settings on speech perception and perceived quality. Study Sample: Seventeen individuals with late-onset ANSD in the age range of 20–35 yr participated in the study. Data Collection and Analysis: Word recognition scores (WRSs) and quality judgments for phonemically balanced words, processed using four different compression settings of a hearing aid (slow, medium, fast, and linear), were evaluated. The modulation spectra of hearing aid–processed stimuli were estimated to probe the effect of amplification on the temporal envelope of speech. Repeated measures analysis of variance and post hoc Bonferroni pairwise comparisons were used to analyze word recognition performance and quality judgments. Results: Perception was significantly higher for unprocessed stimuli than for all four hearing aid–processed stimuli. Although perception of words processed using the slow compression time setting was significantly higher than with the fast setting, the difference was only 4%.
In addition, there were no significant differences in perception between any of the other hearing aid–processed stimuli. Analysis of the temporal envelope of hearing aid–processed stimuli revealed minimal changes across the four hearing aid settings. In terms of quality, the largest number of individuals preferred stimuli processed using the slow compression time setting, followed by those who preferred the medium setting; none of the individuals preferred the fast compression time setting. Analysis of quality judgments showed that the slow, medium, and linear settings received significantly higher preference scores than the fast compression setting. Conclusions: Individuals with ANSD showed no marked difference in perception of speech processed using the four different hearing aid settings. However, significantly higher preference, in terms of quality, was found for stimuli processed using the slow, medium, and linear settings over the fast one. Therefore, whenever hearing aids are recommended for ANSD, slow compression time settings or linear amplification may be chosen over fast (syllabic) compression. In addition, WRSs obtained using hearing aid–processed stimuli were markedly poorer than for unprocessed stimuli, suggesting that processing speech through hearing aids might cause a large reduction in performance in individuals with ANSD. However, further evaluation is needed using individually programmed hearing aids rather than hearing aid–processed stimuli.
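The modulation-spectrum analysis mentioned above begins with the temporal envelope of each processed stimulus. A minimal pure-Python sketch of one common first step, full-wave rectification followed by moving-average smoothing; the function name and toy signal are illustrative, and published analyses more often use a Hilbert transform applied per filterbank channel:

```python
def temporal_envelope(signal, window):
    """Full-wave rectify a sampled signal, then smooth it with a
    centered moving average of the given window length (in samples)."""
    rectified = [abs(x) for x in signal]
    half = window // 2
    envelope = []
    for i in range(len(rectified)):
        lo = max(0, i - half)
        hi = min(len(rectified), i + half + 1)
        envelope.append(sum(rectified[lo:hi]) / (hi - lo))
    return envelope

# A toy alternating waveform has a flat envelope: every rectified sample
# is 1, so each moving-average window also averages to 1.
print(temporal_envelope([1, -1, 1, -1], 3))  # [1.0, 1.0, 1.0, 1.0]
```

Comparing such envelopes (or their spectra) across slow, medium, fast, and linear settings is the kind of check the study used to ask whether compression distorted the speech modulations.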


2013 ◽  
Vol 24 (09) ◽  
pp. 845-858 ◽  
Author(s):  
Petri Korhonen ◽  
Francis Kuk ◽  
Chi Lau ◽  
Denise Keenan ◽  
Jennifer Schumacher ◽  
...  

Background: Today's compression hearing aids with noise reduction systems may not manage transient noises effectively because these sounds are short relative to the onset times of the compressors and/or noise reduction algorithms. Purpose: The current study was designed to evaluate the effect of a transient noise reduction (TNR) algorithm on listening comfort, speech intelligibility in quiet, and preferred wearer gain in the presence of transients. Research Design: A single-blinded, repeated-measures design was used. Study Sample: Thirteen experienced hearing aid users with bilaterally symmetrical (≤7.5 dB) sensorineural hearing loss participated in the study. Results: Speech identification in quiet (no transient noise) was identical between the TNR On and TNR Off conditions. The participants showed a subjective preference for the TNR algorithm when “comfortable listening” was used as the criterion. Participants preferred less gain than the default prescription in the presence of transient noise, but the preferred gain was 2.9 dB higher when the TNR was activated than when it was deactivated. This translated to a 12.1% improvement in phoneme identification for soft speech over the TNR Off condition. Conclusions: This study demonstrated that use of the TNR algorithm would not negatively affect speech identification. The results also suggested that the algorithm may improve listening comfort in the presence of transient noise and support more consistent use of prescribed gain, ensuring more consistent audibility across listening environments.


2020 ◽  
Author(s):  
Solveig Christina Voss ◽  
M Kathleen Pichora-Fuller ◽  
Ieda Ishida ◽  
April Emily Pereira ◽  
Julia Seiter ◽  
...  

Background: Conventional directional hearing aid microphone technology can work against a wearer's listening intentions when a talker and listener walk side by side. The purpose of the current study was to evaluate hearing aids that use a motion sensor to address listening needs during walking. Methods: Participants were 22 older adults with moderate-to-severe hearing loss and experience using hearing aids. Each participant completed two walks in randomized order, one with each of two hearing aid programs: (1) a conventional classifier that activated an adaptive, multiband beamformer in loud environments and (2) a classifier that additionally used motion-based beamformer steering. Participants walked along a predefined track and completed tasks assessing speech understanding and environmental awareness. Results: Most participants preferred the motion-based beamformer steering for speech understanding, environmental awareness, overall listening, and sound quality (p < 0.05). Additionally, measures of speech understanding (p < 0.01) and localization of sound stimuli (p < 0.05) were significantly better with the motion-based beamformer steering than with the conventional classifier. Conclusion: The results suggest that hearing aid users benefit from classifiers that use motion sensor input to adapt the signal processing to the user's activity. The real-world setup of this study had limitations but also high ecological validity.
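The comparison between the two classifiers comes down to one extra input: whether the user is walking. The following sketch is purely illustrative decision logic for that idea; it is not the manufacturer's actual algorithm, and all names are hypothetical:

```python
def select_microphone_mode(is_walking, is_loud):
    """Hypothetical mode selection: a conventional classifier narrows the
    beam whenever the scene is loud; a motion-aware classifier instead
    keeps a wide pickup while the user walks, preserving awareness of a
    talker at the side."""
    if is_walking:
        return "omnidirectional"
    return "adaptive beamformer" if is_loud else "omnidirectional"

# Walking through a loud street: a loudness-only rule would pick the
# beamformer, while the motion-aware rule keeps the wide pickup.
print(select_microphone_mode(is_walking=True, is_loud=True))   # omnidirectional
print(select_microphone_mode(is_walking=False, is_loud=True))  # adaptive beamformer
```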


2008 ◽  
Vol 19 (10) ◽  
pp. 758-773 ◽  
Author(s):  
H Gustav Mueller ◽  
Benjamin W.Y. Hornsby ◽  
Jennifer E. Weber

Background: While there have been many studies of real-world preferred hearing aid gain, few data are available from participants using hearing aids with today's special features activated. Moreover, only limited data have been collected regarding preferred gain for individuals using trainable hearing aids. Purpose: To determine whether real-world preferred gain with trainable modern hearing aids agrees with previous work in this area, and to determine whether the starting programmed gain setting influences the preferred gain outcome. Research Design: An experimental crossover study. Participants were randomly assigned to one of two treatment groups; following the initial treatment, each subject crossed to the opposite group and experienced that treatment. Study Sample: Twenty-two adults with downward-sloping sensorineural hearing loss served as participants (mean age 64.5 yr; 16 males, 6 females). All were experienced users of bilateral amplification. Intervention: Using the crossover design, participants were fitted to two different prescriptive gain conditions: volume control (VC) start-up 6 dB above the NAL-NL1 (National Acoustic Laboratories—Non-linear 1) target or VC start-up 6 dB below the NAL-NL1 target. The hearing aids were used in a 10- to 14-day field trial for each condition, and using the VC, participants could “train” the overall hearing aid gain to their preferred level. During the field trial, daily hearing aid use was logged, as were the listening situations experienced by the listeners, based on the hearing instrument's acoustic scene analysis. The participants completed a questionnaire at the start and end of each field trial in which they rated loudness perceptions and their satisfaction with aided loudness levels. Results: Because several participants potentially experienced floor or ceiling effects for the range of trainable gain, the majority of the statistical analysis was conducted using 12 of the 22 participants.
For both VC-start conditions, the trained preferred gain differed significantly from the NAL-NL1 prescriptive targets. More importantly, the initial start-up gain significantly influenced the trained gain; the mean preferred gain for the +6 dB start condition was approximately 9 dB higher than the preferred gain for the −6 dB start condition, and this difference was statistically significant (p < .001; partial eta squared (η²) = 0.919, a large effect size). Deviation from the NAL-NL1 target was not significantly influenced by the time spent in different listening environments, the amount of hearing aid use during the trial period, or the amount of hearing loss. Questionnaire data showed more appropriate loudness ratings and higher satisfaction with loudness for the 6 dB below target VC-start condition. Conclusions: When trainable hearing aids are used, the initial programmed gain can influence preferred gain in the real world.
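Partial eta squared, the effect size reported for the start-condition difference, is the ratio of the effect's sum of squares to the effect-plus-error sum of squares. A minimal sketch; the sums of squares below are hypothetical values chosen only to reproduce the reported 0.919:

```python
def partial_eta_squared(ss_effect, ss_error):
    """Proportion of variance attributable to the effect, excluding
    variance explained by other factors in the design."""
    return ss_effect / (ss_effect + ss_error)

# Hypothetical sums of squares reproducing the reported effect size.
print(round(partial_eta_squared(919.0, 81.0), 3))  # 0.919
```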


2020 ◽  
Vol 8 (1) ◽  
pp. e001507
Author(s):  
Antonio Carlo Bossi ◽  
Valentina De Mori ◽  
Carlotta Galeone ◽  
Davide Pietro Bertola ◽  
Margherita Gaiti ◽  
...  

Introduction: Sitagliptin is a dipeptidyl peptidase 4 inhibitor for the treatment of type 2 diabetes (T2D). Limited real-world data on its effectiveness and safety are available from an Italian population. Research design and methods: We evaluated long-term clinical data from the single-arm PERsistent Sitagliptin Treatment & Outcomes (PERS&O) study, which collected information on 440 patients with T2D (275 men, 165 women; mean age 64.1 years; median disease duration 12 years) treated with sitagliptin as add-on therapy. For each patient, we estimated the 10-year cardiovascular (CV) risk using the UK Prospective Diabetes Study (UKPDS) Risk Engine (RE). Drug survival was evaluated using Kaplan-Meier survival curves; repeated measures mixed effects models were used to evaluate the evolution of glycated hemoglobin (HbA1c) and CV risk during sitagliptin treatment. Results: At baseline, most patients were overweight or obese (median body mass index (BMI): 30.2 kg/m²); median HbA1c was 8.4%; median fasting plasma glucose was 172 mg/dL; and the median UKPDS RE score was 24.8%, being higher in men (median 30.2%) than in women (median 17.0%), as expected. Median follow-up from the start of sitagliptin treatment was 5.6 years. From Kaplan-Meier curves, the estimated median drug survival was 32.8 months when considering discontinuation for any cause and 58.4 months when considering discontinuation for loss of efficacy. A significant improvement in HbA1c was evident during treatment with sitagliptin (p<0.01): the reduction was rapid (median HbA1c after 4–6 months: 7.5%) and continued at longer follow-up. When comparing patients who continued sitagliptin with those who stopped it and switched to another antihyperglycemic drug, we detected a significant difference in the evolution of HbA1c in favor of patients who continued sitagliptin treatment. The 10-year UKPDS RE score and BMI also improved significantly during treatment with sitagliptin (p<0.001).
Adverse events were relatively uncommon. Conclusion: Patients with T2D treated with sitagliptin achieved an improvement in metabolic control and a reduction in CV risk, and did not experience relevant adverse events.
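The median drug survival figures above are read off a Kaplan-Meier curve: the survival estimate steps down at each discontinuation time, and the median is the first time it reaches 0.5, with patients who switch or drop out without discontinuing treated as censored. A minimal pure-Python sketch with a hypothetical five-patient dataset (the study's real cohort had 440 patients):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimator, returned as (time, survival) step points.

    times  -- follow-up time per patient (e.g., months on drug)
    events -- 1 if the drug was discontinued at that time, 0 if censored
    """
    data = sorted(zip(times, events))
    n = len(data)
    at_risk = n
    survival = 1.0
    curve = [(0.0, 1.0)]
    i = 0
    while i < n:
        t = data[i][0]
        discontinued = censored = 0
        while i < n and data[i][0] == t:  # group ties at the same time
            discontinued += data[i][1]
            censored += 1 - data[i][1]
            i += 1
        if discontinued:
            survival *= 1.0 - discontinued / at_risk
            curve.append((t, survival))
        at_risk -= discontinued + censored
    return curve

def median_survival(curve):
    """First time at which the survival estimate falls to 0.5 or below."""
    for t, s in curve:
        if s <= 0.5:
            return t
    return None  # median not reached during follow-up

# Five hypothetical patients; the fourth is censored at month 40.
curve = kaplan_meier([10, 20, 30, 40, 50], [1, 1, 1, 0, 1])
print(median_survival(curve))  # 30
```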

