Speech Recognition at the Acceptable Noise Level

2015 ◽  
Vol 26 (05) ◽  
pp. 443-450 ◽  
Author(s):  
Susan Gordon-Hickey ◽  
Holly Morlas

Background: The acceptable noise level (ANL) has been proposed as a pre-hearing-aid fitting measure that could be used for hearing aid selection and counseling purposes. Previous work has demonstrated that a listener's ANL is unrelated to their speech recognition in noise abilities. It is unknown what criteria listeners use when selecting their ANL. To date, no research has explored the amount of speech recognized at the listener's ANL. Purpose: To examine the amount of speech recognized at the listener's ANL to determine whether speech recognition in noise is a factor in setting ANL. Research Design: A descriptive quasi-experimental study was completed. For all listeners, ANL was measured, and speech recognition in noise was tested at ANL and at two additional signal-to-noise ratio (SNR) conditions based on the listener's ANL (ANL + 5 and ANL − 5). Study Sample: Forty-four older adults served as participants. Twenty-seven had normal hearing, and seventeen had mild to moderately severe, symmetrical, sensorineural hearing loss. Data Collection and Analysis: Acceptance of noise was calculated from the measures of most comfortable listening level and background noise level. Additionally, speech recognition in noise was assessed at three SNRs using the Quick Speech-in-Noise (QuickSIN) test materials. Results: A significant interaction effect of SNR condition and ANL group occurred for speech recognition. At ANL, a significant difference in speech recognition in noise was found across groups. Those in the mid and high ANL groups had excellent speech recognition at their ANL. Speech recognition in noise at ANL decreased with decreasing ANL category. Conclusions: For listeners with mid and high ANLs, speech recognition appears to play a primary role in setting their ANL. For those with low ANLs, speech recognition may contribute to setting their ANL; however, it does not appear to be the primary determiner of ANL. For those with very low ANLs, speech recognition does not appear to be a significant variable in setting their ANL.
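The three test SNRs in this design are derived directly from each listener's measured ANL. A minimal sketch of that derivation (the function and variable names are ours, not the study's):

```python
def snr_conditions(anl_db, offset_db=5):
    """Speech-in-noise test SNRs derived from a listener's ANL:
    ANL - 5, ANL itself, and ANL + 5 (all in dB)."""
    return [anl_db - offset_db, anl_db, anl_db + offset_db]

# A listener with an ANL of 10 dB would be tested at 5, 10, and 15 dB SNR.
print(snr_conditions(10))  # [5, 10, 15]
```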

2009 ◽  
Vol 20 (07) ◽  
pp. 409-421 ◽  
Author(s):  
Jace Wolfe ◽  
Erin C. Schafer ◽  
Benjamin Heldner ◽  
Hans Mülder ◽  
Emily Ward ◽  
...  

Background: Use of personal frequency-modulated (FM) systems significantly improves speech recognition in noise for users of cochlear implants (CIs). Previous studies have shown that the most appropriate gain setting on the FM receiver may vary based on the listening situation and the manufacturer of the CI system. Unlike traditional FM systems with fixed-gain settings, Dynamic FM automatically varies the gain of the FM receiver with changes in the ambient noise level. There are no published reports describing the benefits of Dynamic FM use for CI recipients or how Dynamic FM performance varies as a function of CI manufacturer. Purpose: To evaluate speech recognition of Advanced Bionics Corporation and Cochlear Corporation CI recipients using Dynamic FM vs. a traditional FM system, and to examine the effects of Autosensitivity on the FM performance of Cochlear Corporation recipients. Research Design: A two-group repeated-measures design. Participants were assigned to a group according to their type of CI. Study Sample: Twenty-five subjects, ranging in age from 8 to 82 years, met the inclusion criteria for one or more of the experiments. Thirteen subjects used Advanced Bionics Corporation implants, and twelve used Cochlear Corporation implants. Intervention: Speech recognition was assessed while subjects used traditional, fixed-gain FM systems and Dynamic FM systems. Data Collection and Analysis: In Experiments 1 and 2, speech recognition was evaluated with a traditional, fixed-gain FM system and a Dynamic FM system using the Hearing in Noise Test sentences in quiet and in classroom noise. A repeated-measures analysis of variance (ANOVA) was used to evaluate effects of CI manufacturer (Advanced Bionics and Cochlear Corporation), type of FM system (traditional and dynamic), noise level, and use of Autosensitivity for users of Cochlear Corporation implants.
Experiment 3 determined the effects of Autosensitivity on speech recognition of Cochlear Corporation implant recipients when listening through the speech processor microphone with the FM system muted. A repeated-measures ANOVA was used to examine the effects of signal-to-noise ratio and Autosensitivity. Results: In Experiment 1, use of Dynamic FM resulted in better speech recognition in noise for Advanced Bionics recipients relative to traditional FM at noise levels of 65, 70, and 75 dB SPL. Advanced Bionics recipients obtained better speech recognition in noise with FM use when compared to Cochlear Corporation recipients. When Autosensitivity was enabled in Experiment 2, the performance of Cochlear Corporation recipients was equivalent to that of Advanced Bionics recipients, and Dynamic FM was significantly better than traditional FM. Results of Experiment 3 indicate that use of Autosensitivity improves speech recognition in noise of signals directed to the speech processor relative to no Autosensitivity. Conclusions: Dynamic FM should be considered for use with persons with CIs to improve speech recognition in noise. At default CI settings, FM performance is better for Advanced Bionics recipients when compared to Cochlear Corporation recipients, but use of Autosensitivity by Cochlear Corporation users results in equivalent group performance.


2018 ◽  
Vol 2018 ◽  
pp. 1-9
Author(s):  
Liang Xia ◽  
Jingchun He ◽  
Yuanyuan Sun ◽  
Yi Chen ◽  
Qiong Luo ◽  
...  

The acceptable noise level (ANL) is defined as the most comfortable listening level (MCL) minus the background noise level (BNL): ANL = MCL − BNL. This study compared ANLs obtained through different methods in 20 Chinese subjects with normal hearing. ANL was tested with Mandarin speech materials using a loudspeaker or earphones, with noise levels adjusted either by the subject or by the audiologist. The presentation and response modes were as follows: (1) loudspeaker with self-adjusted noise levels using audiometer controls (LS method); (2) loudspeaker with the subject signaling the audiologist to adjust speech and noise levels (LA method); (3) earphones with self-adjusted noise levels using audiometer controls (ES method); and (4) earphones with the subject signaling the audiologist to adjust speech and noise levels (EA method). ANL was calculated from three measurements with each method. There was no significant difference in the ANL obtained through the different presentation or response modes. The correlations between the ANL, MCL, and BNL values obtained from any two methods were significant. In conclusion, the ANL of normal-hearing Mandarin listeners appears unaffected by presentation mode (loudspeaker or earphones) and by response mode (self-adjusted or audiologist-adjusted). Earphone audiometry is as reliable as sound-field audiometry and provides an easy, convenient way to measure ANL.
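Since ANL = MCL − BNL and each method's ANL was calculated from three measurements, the arithmetic can be sketched as follows (the (MCL, BNL) triplets below are invented for illustration, not data from the study):

```python
def anl(mcl_db, bnl_db):
    """Acceptable noise level: ANL = MCL - BNL (both in dB)."""
    return mcl_db - bnl_db

def mean_anl(measurements):
    """Average ANL over repeated (MCL, BNL) measurements for one method."""
    anls = [anl(mcl, bnl) for mcl, bnl in measurements]
    return sum(anls) / len(anls)

# Hypothetical triplet of runs for, say, the LS method: (MCL, BNL) in dB HL.
ls_runs = [(60, 52), (62, 53), (61, 54)]
print(mean_anl(ls_runs))  # (8 + 9 + 7) / 3 = 8.0
```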


2012 ◽  
Vol 23 (03) ◽  
pp. 171-181 ◽  
Author(s):  
Rachel A. McArdle ◽  
Mead Killion ◽  
Monica A. Mennite ◽  
Theresa H. Chisolm

Background: The decision to fit one or two hearing aids in individuals with binaural hearing loss has been debated for years. Although some 78% of U.S. hearing aid fittings are binaural (Kochkin, 2010), Walden and Walden (2005) presented data showing that 82% (23 of 28 patients) of their sample obtained significantly better speech recognition in noise scores when wearing one hearing aid as opposed to two. Purpose: To conduct two new experiments to fuel the monaural/binaural debate. The first experiment was a replication of Walden and Walden (2005), whereas the second examined the use of binaural cues to improve speech recognition in noise. Research Design: A repeated-measures experimental design. Study Sample: Twenty veterans (aged 59–85 yr) with mild to moderately severe, binaurally symmetrical hearing loss who wore binaural hearing aids were recruited from the Audiology Department at the Bay Pines VA Healthcare System. Data Collection and Analysis: Experiment 1 followed the procedures of the Walden and Walden study, where signal-to-noise ratio (SNR) loss was measured using the Quick Speech-in-Noise (QuickSIN) test on participants who were aided with their current hearing aids. Signal and noise were presented in the sound booth at 0° azimuth under five test conditions: (1) right ear aided, (2) left ear aided, (3) both ears aided, (4) right ear aided, left ear plugged, and (5) unaided. The opposite ear in conditions (1) and (2) was left open. In Experiment 2, binaural Knowles Electronics Manikin for Acoustic Research (KEMAR) recordings made in Lou Malnati's pizza restaurant during a busy period provided typical real-world noise, while prerecorded target sentences were presented through a small loudspeaker located in front of the KEMAR. Subjects listened to the resulting binaural recordings through insert earphones under four conditions: (1) binaural, (2) diotic, (3) monaural left, and (4) monaural right.
Results: Results of repeated-measures ANOVAs demonstrated that the best speech recognition in noise performance was obtained by most participants with both ears aided in Experiment 1 and in the binaural condition in Experiment 2. Conclusions: In both experiments, only 20% of our subjects did better in noise with a single ear, roughly in line with the earlier Jerger et al. (1993) finding that 8–10% of elderly hearing aid users preferred one hearing aid.


Author(s):  
Andrew J. Vermiglio ◽  
Lauren Leclerc ◽  
Meagan Thornton ◽  
Hannah Osborne ◽  
Elizabeth Bonilla ◽  
...  

Purpose: The goal of this study was to determine the ability of the AzBio speech recognition in noise (SRN) test to distinguish between groups of participants with and without a self-reported SRN disorder and a self-reported signal-to-noise ratio (SNR) loss. Method: Fifty-four native English-speaking young adults with normal pure-tone thresholds (≤ 25 dB HL, 0.25–6.0 kHz) participated. Individuals who reported hearing difficulty in a noisy restaurant (Reference Standard 1) were placed in the SRN disorder group. SNR loss groups were created based on the self-report of the ability to hear Hearing in Noise Test (HINT) sentences in steady-state speech-shaped noise, four-talker babble, and 20-talker babble in a controlled listening environment (Reference Standard 2). Participants with HINT thresholds poorer than or equal to the median were assigned to the SNR loss group. Results: The area under the curve (AUC) from the receiver operating characteristic curves revealed that the AzBio test was not a significant predictor of an SRN disorder or of an SNR loss using the steady-state noise Reference Standard 2 condition. However, the AzBio was a significant predictor of an SNR loss using the four-talker babble and 20-talker babble Reference Standard 2 conditions (p < .05). The AzBio was also a significant predictor of an SNR loss when using the average HINT thresholds across the three Reference Standard 2 masker conditions (AUC = .79, p = .001). Conclusions: The AzBio test was not a significant predictor of a self-reported SRN disorder or of a self-reported SNR loss in steady-state noise. However, it was a significant predictor of a self-reported SNR loss in babble noise and in the average across all noise conditions. A battery of reference standard tests with a range of maskers in a controlled listening environment is recommended for diagnostic accuracy evaluations of SRN tests.


2021 ◽  
Vol 32 (07) ◽  
pp. 469-476
Author(s):  
Maria Madalena Canina Pinheiro ◽  
Patricia Cotta Mancini ◽  
Alexandra Dezani Soares ◽  
Ângela Ribas ◽  
Danielle Penna Lima ◽  
...  

Background: Speech recognition in noisy environments is a challenge for both cochlear implant (CI) users and device manufacturers. CI manufacturers have been investing in technological innovations for processors and researching strategies to improve signal processing and signal design for better aesthetic acceptance and everyday use. Purpose: This study aimed to compare speech recognition in CI users using off-the-ear (OTE) and behind-the-ear (BTE) processors. Design: A cross-sectional study was conducted with 51 CI recipients, all users of the BTE Nucleus 5 (CP810) sound processor. Speech perception performances were compared in quiet and noisy conditions using the BTE sound processor Nucleus 5 (N5) and the OTE sound processor Kanso. Each participant was tested with the Brazilian-Portuguese version of the hearing in noise test using each sound processor in a randomized order. Three test conditions were analyzed with both sound processors: (i) speech level fixed at 65 dB SPL in quiet, (ii) speech and noise at fixed levels, and (iii) adaptive speech levels with a fixed noise level. To determine the relative performance of OTE with respect to BTE, paired comparison analyses were performed. Results: The paired t-tests showed no significant difference between the N5 and Kanso in quiet conditions. In all noise conditions, the performance of the OTE (Kanso) sound processor was superior to that of the BTE (N5), regardless of the order in which they were used. With speech and noise at fixed levels, sentence scores were significantly higher with Kanso (78.1%) than with N5 (70.7%). Conclusion: CI users had a lower signal-to-noise ratio and a higher percentage of sentence recognition with the OTE processor than with the BTE processor.


2014 ◽  
Vol 25 (06) ◽  
pp. 529-540 ◽  
Author(s):  
Erin C. Schafer ◽  
Danielle Bryant ◽  
Katie Sanders ◽  
Nicole Baldus ◽  
Katherine Algier ◽  
...  

Background: Several recent investigations support the use of frequency modulation (FM) systems in children with normal hearing and auditory processing or listening disorders such as those diagnosed with auditory processing disorders, autism spectrum disorders, attention-deficit hyperactivity disorder, Friedreich ataxia, and dyslexia. The American Academy of Audiology (AAA) published suggested procedures, but these guidelines do not cite research evidence to support the validity of the recommended procedures for fitting and verifying nonoccluding open-ear FM systems on children with normal hearing. Documenting the validity of these fitting procedures is critical to maximize the potential FM-system benefit in the abovementioned populations of children with normal hearing and those with auditory-listening problems. Purpose: The primary goal of this investigation was to determine the validity of the AAA real-ear approach to fitting FM systems on children with normal hearing. The secondary goal of this study was to examine speech-recognition performance in noise and loudness ratings without and with FM systems in children with normal hearing sensitivity. Research Design: A two-group, cross-sectional design was used in the present study. Study Sample: Twenty-six typically functioning children, ages 5–12 yr, with normal hearing sensitivity participated in the study. Intervention: Participants used a nonoccluding open-ear FM receiver during laboratory-based testing. Data Collection and Analysis: Participants completed three laboratory tests: (1) real-ear measures, (2) speech recognition performance in noise, and (3) loudness ratings. 
Four real-ear measures were conducted to (1) verify that measured output met prescribed-gain targets across the 1000–4000 Hz frequency range for speech stimuli, (2) confirm that the FM-receiver volume did not exceed predicted uncomfortable loudness levels, and (3 and 4) measure changes to the real-ear unaided response when placing the FM receiver in the child's ear. After completion of the fitting, speech recognition in noise at a −5 dB signal-to-noise ratio and loudness ratings at a +5 dB signal-to-noise ratio were measured in four conditions: (1) no FM system, (2) FM receiver on the right ear, (3) FM receiver on the left ear, and (4) bilateral FM system. Results: The results of this study suggested that the slightly modified AAA real-ear measurement procedures resulted in a valid fitting of an FM system on children with normal hearing. On average, prescriptive targets were met within 3 dB at 1000, 2000, 3000, and 4000 Hz, and the maximum output of the FM system never exceeded, and was significantly lower than, predicted uncomfortable loudness levels for the children. There was minimal change in the real-ear unaided response when the open-ear FM receiver was placed into the ear. Use of the FM system on one or both ears resulted in significantly better speech recognition in noise relative to the no-FM condition, and the unilateral and bilateral FM receivers produced a comfortably loud signal when listening in background noise. Conclusions: Real-ear measures are critical for obtaining an appropriate fit of an FM system on children with normal hearing.


2019 ◽  
Vol 30 (07) ◽  
pp. 607-618 ◽  
Author(s):  
Thomas Wesarg ◽  
Susan Arndt ◽  
Konstantin Wiebe ◽  
Frauke Schmid ◽  
Annika Huber ◽  
...  

Abstract: Previous research in cochlear implant (CI) recipients with bilateral severe-to-profound sensorineural hearing loss showed improvements in speech recognition in noise using remote wireless microphone systems. However, to our knowledge, no previous studies have addressed the benefit of these systems in CI recipients with single-sided deafness. The objective of this study was to evaluate the potential improvement in speech recognition in noise for distant speakers in single-sided deaf (SSD) CI recipients obtained using the digital remote wireless microphone system Roger. In addition, we evaluated the potential benefit gained by normal hearing (NH) participants using this system. Speech recognition in noise for a distant speaker in different conditions with and without Roger was evaluated with a two-way repeated-measures design in each group (SSD CI recipients and NH participants). Post hoc analyses were conducted using pairwise comparison t-tests with Bonferroni correction. Eleven adult SSD participants aided with CIs and eleven adult NH participants were included in this study. All participants were assessed in 15 test conditions (5 listening conditions × 3 noise levels) each. The listening conditions for SSD CI recipients were as follows: (I) only NH ear, CI turned off; (II) NH ear and CI (turned on); (III) NH ear and CI with Roger 14; (IV) NH ear with Roger Focus and CI; and (V) NH ear with Roger Focus and CI with Roger 14. For the NH participants, five corresponding listening conditions were chosen: (I) only better ear, weaker ear masked; (II) both ears; (III) better ear and weaker ear with Roger Focus; (IV) better ear with Roger Focus and weaker ear; and (V) both ears with Roger Focus. The speech level was fixed at 65 dB(A) at 1 meter from the speech-presenting loudspeaker, yielding a speech level of 56.5 dB(A) at the recipient's head. Noise levels were 55, 65, and 75 dB(A). Digitally altered noise recorded in school classrooms was used as competing noise. Speech recognition was measured in percent correct using the Oldenburg sentence test. In SSD CI recipients, a significant improvement in speech recognition was found for all listening conditions with Roger (III, IV, and V) versus all no-Roger conditions (I and II) at the higher noise levels (65 and 75 dB[A]). NH participants likewise benefited significantly from Roger at the higher noise levels. In both groups, no significant difference was detected between any of the listening conditions at 55 dB(A) competing noise. There was also no significant difference between any of the Roger conditions (III, IV, and V) across noise levels. The application of the advanced remote wireless microphone system Roger in SSD CI recipients provided significant benefits in speech recognition for distant speakers at higher noise levels. In NH participants, Roger also produced a significant benefit in speech recognition in noise.


2011 ◽  
Vol 22 (02) ◽  
pp. 065-080 ◽  
Author(s):  
Alison M. Brockmeyer ◽  
Lisa G. Potts

Background: Difficulty understanding in background noise is a common complaint of cochlear implant (CI) recipients. Programming options are available to improve speech recognition in noise for CI users including automatic dynamic range optimization (ADRO), autosensitivity control (ASC), and a two-stage adaptive beamforming algorithm (BEAM). However, the processing option that results in the best speech recognition in noise is unknown. In addition, laboratory measures of these processing options often show greater degrees of improvement than reported by participants in everyday listening situations. To address this issue, Compton-Conley and colleagues developed a test system to replicate a restaurant environment. The R-SPACE™ consists of eight loudspeakers positioned in a 360 degree arc and utilizes a recording made at a restaurant of background noise. Purpose: The present study measured speech recognition in the R-SPACE with four processing options: standard dual-port directional (STD), ADRO, ASC, and BEAM. Research Design: A repeated-measures, within-subject design was used to evaluate the four different processing options at two noise levels. Study Sample: Twenty-seven unilateral and three bilateral adult Nucleus Freedom CI recipients. Intervention: The participants’ everyday program (with no additional processing) was used as the STD program. ADRO, ASC, and BEAM were added individually to the STD program to create a total of four programs. Data Collection and Analysis: Participants repeated Hearing in Noise Test sentences presented at 0 degrees azimuth with R-SPACE restaurant noise at two noise levels, 60 and 70 dB SPL. The reception threshold for sentences (RTS) was obtained for each processing condition and noise level. Results: In 60 dB SPL noise, BEAM processing resulted in the best RTS, with a significant improvement over STD and ADRO processing. 
In 70 dB SPL noise, ASC and BEAM processing had significantly better mean RTSs compared to STD and ADRO processing. Comparison of noise levels showed that STD and BEAM processing resulted in significantly poorer RTSs in 70 dB SPL noise compared to the performance with these processing conditions in 60 dB SPL noise. Bilateral participants demonstrated a bilateral improvement compared to the better monaural condition for both noise levels and all processing conditions, except ASC in 60 dB SPL noise. Conclusions: The results of this study suggest that the use of processing options that incorporate noise reduction, such as ASC and BEAM, improves a CI recipient's ability to understand speech in noise in listening situations similar to those experienced in the real world. The best processing option depends on the noise level: for unilateral CI recipients, BEAM was best at moderate noise levels and ASC at loud noise levels. Therefore, multiple noise programs or a combination of processing options may be necessary to provide CI users with the best performance in a variety of listening situations.
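The reception threshold for sentences (RTS) is obtained adaptively: the SNR is made harder after a correct response and easier after an incorrect one, and the threshold is estimated from the SNRs actually presented. A generic one-up/one-down sketch of this idea (the step size, starting SNR, and averaging rule here are illustrative, not the exact HINT/R-SPACE procedure):

```python
def track_rts(responses, start_snr=0, step=2):
    """Simple 1-up/1-down adaptive SNR track.

    responses: sequence of booleans (True = sentence repeated correctly).
    Returns the mean of the presented SNRs, a common estimate of the
    reception threshold for sentences (RTS) in dB.
    """
    snr = start_snr
    presented = []
    for correct in responses:
        presented.append(snr)
        snr += -step if correct else step  # harder after a hit, easier after a miss
    return sum(presented) / len(presented)

# Alternating hits and misses make the track hover just below the start level.
print(track_rts([True, False, True, False, True, False]))  # -1.0
```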


2014 ◽  
Vol 25 (02) ◽  
pp. 141-153 ◽  
Author(s):  
Yu-Hsiang Wu ◽  
Elizabeth Stangl ◽  
Carol Pang ◽  
Xuyang Zhang

Background: Little is known regarding the acoustic features of a stimulus used by listeners to determine the acceptable noise level (ANL). Features suggested by previous research include speech intelligibility (noise is unacceptable when it degrades speech intelligibility to a certain degree; the intelligibility hypothesis) and loudness (noise is unacceptable when the speech-to-noise loudness ratio is poorer than a certain level; the loudness hypothesis). Purpose: The purpose of the study was to investigate if speech intelligibility or loudness is the criterion feature that determines ANL. To achieve this, test conditions were chosen so that the intelligibility and loudness hypotheses would predict different results. In Experiment 1, the effect of audiovisual (AV) and binaural listening on ANL was investigated; in Experiment 2, the effect of interaural correlation (ρ) on ANL was examined. Research Design: A single-blinded, repeated-measures design was used. Study Sample: Thirty-two and twenty-five younger adults with normal hearing participated in Experiments 1 and 2, respectively. Data Collection and Analysis: In Experiment 1, both ANL and speech recognition performance were measured using the AV version of the Connected Speech Test (CST) in three conditions: AV-binaural, auditory only (AO)-binaural, and AO-monaural. Lipreading skill was assessed using the Utley lipreading test. 
In Experiment 2, ANL and speech recognition performance were measured using the Hearing in Noise Test (HINT) in three binaural conditions, wherein the interaural correlation of noise was varied: ρ = 1 (NoSo [a listening condition wherein both speech and noise signals are identical across two ears]), −1 (NπSo [a listening condition wherein speech signals are identical across two ears whereas the noise signals of two ears are 180 degrees out of phase]), and 0 (NuSo [a listening condition wherein speech signals are identical across two ears whereas noise signals are uncorrelated across ears]). The results were compared to the predictions made based on the intelligibility and loudness hypotheses. Results: The results of the AV and AO conditions appeared to support the intelligibility hypothesis due to the significant correlation between visual benefit in ANL (AV re: AO ANL) and (1) visual benefit in CST performance (AV re: AO CST) and (2) lipreading skill. The results of the NoSo, NπSo, and NuSo conditions negated the intelligibility hypothesis because binaural processing benefit (NπSo re: NoSo, and NuSo re: NoSo) in ANL was not correlated to that in HINT performance. Instead, the results somewhat supported the loudness hypothesis because the pattern of ANL results across the three conditions (NoSo ≈ NπSo ≈ NuSo ANL) was more consistent with what was predicted by the loudness hypothesis (NoSo ≈ NπSo < NuSo ANL) than by the intelligibility hypothesis (NπSo < NuSo < NoSo ANL). The results of the binaural and monaural conditions supported neither hypothesis because (1) binaural benefit (binaural re: monaural) in ANL was not correlated to that in speech recognition performance, and (2) the pattern of ANL results across conditions (binaural < monaural ANL) was not consistent with the prediction made based on previous binaural loudness summation research (binaural ≥ monaural ANL). 
Conclusions: The study suggests that listeners may use multiple acoustic features to make ANL judgments. The binaural/monaural results showing that neither hypothesis was supported further indicate that factors other than speech intelligibility and loudness, such as psychological factors, may affect ANL. The weightings of different acoustic features in ANL judgments may vary widely across individuals and listening conditions.
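The NoSo, NπSo, and NuSo configurations above differ only in the interaural correlation ρ of the noise. A standard way to synthesize two noise channels with a target ρ is to mix a shared and an independent component (a generic sketch of the technique, not the study's actual HINT stimuli):

```python
import math
import random

def binaural_noise(n, rho, seed=0):
    """Two noise channels with interaural correlation rho.

    right = rho * x + sqrt(1 - rho^2) * y, where x (= left) and y are
    independent Gaussian samples. rho = 1 gives identical channels (No),
    rho = -1 gives a phase-inverted copy (Npi), rho = 0 gives
    uncorrelated channels (Nu).
    """
    rng = random.Random(seed)
    x = [rng.gauss(0, 1) for _ in range(n)]
    y = [rng.gauss(0, 1) for _ in range(n)]
    left = x
    right = [rho * xi + math.sqrt(1 - rho * rho) * yi for xi, yi in zip(x, y)]
    return left, right

def correlation(a, b):
    """Pearson correlation between two equal-length sample lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / math.sqrt(va * vb)

left, right = binaural_noise(20000, rho=0.0)
print(abs(correlation(left, right)) < 0.05)  # near zero for Nu-type noise
```

For ρ = ±1 the mixing term vanishes and the right channel is an exact (or exactly inverted) copy of the left, matching the diotic and antiphasic conditions.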


Author(s):  
Julie Beadle ◽  
Jeesun Kim ◽  
Chris Davis

Purpose: Listeners understand significantly more speech in noise when the talker's face can be seen (visual speech) than in an auditory-only baseline (a visual speech benefit). This study investigated whether the visual speech benefit is reduced when the correspondence between auditory and visual speech is uncertain, and whether any reduction depends on listener age (older vs. younger) and on how severely the auditory signal is masked. Method: Older and younger adults completed a speech recognition in noise task that included an auditory-only condition and four auditory–visual (AV) conditions in which one, two, four, or six silent talking-face videos were presented. One face always matched the auditory signal; the other face(s) did not. Auditory speech was presented in noise at −6 and −1 dB signal-to-noise ratio (SNR). Results: When the SNR was −6 dB, for both age groups, the standard-sized visual speech benefit was reduced as more talking faces were presented. When the SNR was −1 dB, younger adults received the standard-sized visual speech benefit even when two talking faces were presented, whereas older adults did not. Conclusions: The size of the visual speech benefit obtained by older adults was always smaller when AV correspondence was uncertain; this was not the case for younger adults. Difficulty establishing AV correspondence may be a factor that limits older adults' speech recognition in noisy AV environments. Supplemental Material: https://doi.org/10.23641/asha.16879549

