Enhancement of the Auditory Late Response (N1-P2) by Presentation of Stimuli From an Unexpected Location

2019 ◽  
Vol 30 (06) ◽  
pp. 451-458 ◽  
Author(s):  
Raquel M. Heacock ◽  
Amanda Pigeon ◽  
Gail Chermak ◽  
Frank Musiek ◽  
Jeffrey Weihing

Abstract: Passive electrophysiological protocols, such as the middle latency response and the speech auditory brainstem response, are often advocated for the objective assessment of central auditory processing disorder (CAPD). However, few established electrophysiological protocols exist for CAPD assessment that have patients participate in active tasks, which more closely approximate real-world listening. To this end, the present study used a discrimination task (i.e., an oddball paradigm) to measure an enhancement of the auditory late response (N1-P2) that occurs when participants direct their auditory attention toward speech arising from an unexpected spatial location. Purpose: To establish whether the N1-P2 is enhanced when auditory attention is directed toward an unexpected location during a two-word discrimination task. It was also investigated whether any enhancement in this response was contingent on the stimulus being counted as part of the oddball paradigm. Research Design: Prospective study with a repeated measures design. Study Sample: Ten normal-hearing adults, with an age range of 18–24 years. Data Collection and Analysis: The N1 and P2 latencies and peak-to-peak amplitudes were recorded during a P300 paradigm. A series of repeated measures analyses of variance and a correlation analysis were performed. Results: There was a significant effect of stimulus location: words arising from the unexpected location showed a larger N1-P2 peak-to-peak amplitude and an earlier N1 latency. This effect was seen regardless of whether participants had to count the word total in memory. Conclusions: These findings suggest that spatial enhancement of the N1-P2 is a fairly robust phenomenon in normal-hearing adult listeners. Additional studies are needed to determine whether this enhancement is absent or reduced in patients with CAPD.

2008 ◽  
Vol 19 (06) ◽  
pp. 481-495 ◽  
Author(s):  
Jeffrey Weihing ◽  
Frank E. Musiek

Background: A common complaint of patients with (central) auditory processing disorder is difficulty understanding speech in noise. Because binaural hearing improves speech understanding in compromised listening situations, quantifying this ability in different levels of noise may yield a measure with high clinical utility. Purpose: To examine binaural enhancement (BE) and binaural interaction (BI) in different levels of noise for the auditory brainstem response (ABR) and middle latency response (MLR) in a normal hearing population. Research Design: An experimental study in which subjects were exposed to a repeated measures design. Study Sample: Fifteen normal hearing female adults served as subjects. Normal hearing was assessed by pure-tone audiometry and otoacoustic emissions. Intervention: All subjects were exposed to 0, 20, and 35 dB effective masking (EM) of white noise during monotic and diotic click stimulation. Data Collection and Analysis: ABR and MLR responses were simultaneously acquired. Peak amplitudes and latencies were recorded and compared across conditions using a repeated measures analysis of variance (ANOVA). Results: For BE, ABR results showed enhancement at 0 and 20 dB EM, but not at 35 dB EM. The MLR showed BE at all noise levels, but the degree of BE decreased with increasing noise level. For BI, both the ABR and MLR showed BI at all noise levels. However, the degree of BI again decreased with increasing noise level for the MLR. Conclusions: The results demonstrate the ability to measure BE simultaneously in the ABR and MLR in up to 20 dB of EM noise and BI in up to 35 dB EM of noise. Results also suggest that ABR neural generators may respond to noise differently than MLR generators.


2017 ◽  
Vol 28 (05) ◽  
pp. 373-384
Author(s):  
Kathy R. Vander Werff ◽  
Kerrie L. Nesbitt

Background: Recent behavioral studies have suggested that individuals with sloping audiograms exhibit localized improvements in frequency discrimination in the frequency region near the drop in hearing. Auditory-evoked potentials may provide evidence of such cortical plasticity and reorganization of frequency maps. Purpose: The objective of this study was to evaluate electrophysiological evidence of cortical plasticity related to cortical frequency representation and discrimination abilities in older individuals with high-frequency sensorineural hearing loss (SNHL). It was hypothesized that the P3 response in this group would show evidence of physiological reorganization of frequency maps and enhanced neural representation at the edge of their high-frequency loss due to their restricted SNHL. Research Design: The P3 auditory event-related potential in response to small frequency changes was recorded in a repeated measures design using an oddball paradigm that presented upward and downward frequency changes of 2%, 5%, and 20% to three groups of listeners. Study Sample: P3 recordings from a group of seven older individuals with a restricted sloping hearing loss above 1000 or 2000 Hz were compared with those of two control groups of younger (n = 7) and older (n = 7) individuals with normal hearing/borderline normal hearing through 4000 Hz. Data Collection and Analysis: The auditory P3 was recorded using an oddball paradigm (80%/20%) with the standard tone at the highest frequency of normal hearing in the hearing-impaired participants, also known as the edge frequency (EF). EFs were either 1000 or 2000 Hz for all participants. The target tones represented upward and downward frequency changes of 2%, 5%, and 20% from the standard tones of either 1000 or 2000 Hz. Waveforms were recorded using a two-channel clinical-evoked potential system. Latency and amplitude of the P300 peak were analyzed across groups for the three frequency conditions using repeated measures analysis of variance.
Results: The results of this study suggest that the P3 response can be elicited by frequency changes as small as 2–5%. P3 responses at the EF of hearing loss were present and larger in amplitude for more participants with a sloping hearing loss compared to age-matched normal-hearing peers tested at the same frequencies. As a result, the older participants with sloping hearing losses had P3 responses more similar to the younger normal-hearing participants than their age-matched peers with normal hearing. Conclusions: These preliminary results partially support the idea of enhanced cortical representation of frequency at the EF of localized SNHL in older adults that is not purely due to age.
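The 80%/20% oddball presentation schedule described above can be sketched in a few lines. The function name, the no-adjacent-targets constraint, and all parameter values below are illustrative assumptions, not details taken from the study's protocol:

```python
import random

def oddball_sequence(n_trials, p_target=0.2, seed=0):
    """Sketch of an oddball stimulus schedule: mostly 'standard' tones with
    occasional 'target' (deviant) tones. A common extra constraint, assumed
    here, is that two targets never occur back to back."""
    rng = random.Random(seed)
    seq = []
    for _ in range(n_trials):
        if seq and seq[-1] == "target":
            seq.append("standard")  # enforce no adjacent targets
        elif rng.random() < p_target:
            seq.append("target")
        else:
            seq.append("standard")
    return seq
```

Because of the adjacency constraint, the realized target rate runs slightly below `p_target`; real protocols often instead shuffle a fixed 80/20 token pool under the same constraint.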


2008 ◽  
Vol 19 (06) ◽  
pp. 496-506 ◽  
Author(s):  
Richard H. Wilson ◽  
Rachel McArdle ◽  
Heidi Roberts

Background: So that portions of the classic Miller, Heise, and Lichten (1951) study could be replicated, new recorded versions of the words and digits were made because none of the three common monosyllabic word lists (PAL PB-50, CID W-22, and NU–6) contained the 9 monosyllabic digits (1–10, excluding 7) that were used by Miller et al. It is well established that different psychometric characteristics have been observed for different lists and even for the same materials spoken by different speakers. The decision was made to record four lists of each of the three monosyllabic word sets, the monosyllabic digits not included in the three sets of word lists, and the CID W-1 spondaic words. A professional female speaker with a General American dialect recorded the materials during four recording sessions within a 2-week interval. The recording order of the 582 words was random. Purpose: To determine, on listeners with normal hearing, the psychometric properties of the five speech materials presented in speech-spectrum noise. Research Design: A quasi-experimental, repeated-measures design was used. Study Sample: Twenty-four young adult listeners (M = 23 years) with normal pure-tone thresholds (≤20 dB HL at 250 to 8000 Hz) participated. The participants were university students who were unfamiliar with the test materials. Data Collection and Analysis: The 582 words were presented at four signal-to-noise ratios (SNRs: −7, −2, 3, and 8 dB) in speech-spectrum noise fixed at 72 dB SPL. Although the main metric of interest was the 50% point on the function for each word established with the Spearman-Kärber equation (Finney, 1952), the percentage correct on each word at each SNR was evaluated. The psychometric characteristics of the PB-50, CID W-22, and NU–6 monosyllabic word lists were compared with one another, with the CID W-1 spondaic words, and with the 9 monosyllabic digits.
Results: Recognition performance on the four lists within each of the three monosyllabic word materials was equivalent, ±0.4 dB. Likewise, word-recognition performance on the PB-50, W-22, and NU–6 word lists was equivalent, ±0.2 dB. The mean recognition performance at the 50% point with the 36 W-1 spondaic words was ~6.2 dB lower than the 50% point with the monosyllabic words. Recognition performance on the monosyllabic digits was 1–2 dB better than mean performance on the monosyllabic words. Conclusions: Word-recognition performances on the three sets of materials (PB-50, CID W-22, and NU–6) were equivalent, as were the performances on the four lists that make up each of the three materials. Phonetic/phonemic balance does not appear to be an important consideration in the compilation of word-recognition lists used to evaluate the ability of listeners to understand speech. A companion paper examines the acoustic, phonetic/phonological, and lexical variables that may predict the relative ease or difficulty with which these monosyllabic words were recognized in noise (McArdle and Wilson, this issue).
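The Spearman-Kärber estimate of the 50% point cited above reduces to simple arithmetic on the per-SNR proportions correct. A minimal sketch, assuming a monotonized psychometric function and 0%/100% performance one SNR step below/above the tested range (the function name and the range-extension rule are my assumptions, not the paper's):

```python
def spearman_karber_50(snrs, p_correct):
    """Estimate the 50% point (dB SNR) of a psychometric function with the
    Spearman-Karber method: sum interval midpoints weighted by the rise in
    proportion correct across each interval."""
    # Monotonize the proportions (running maximum) so the method applies.
    p, running = [], 0.0
    for v in p_correct:
        running = max(running, v)
        p.append(running)
    # Assume 0% one step below the lowest SNR and 100% one step above the highest.
    xs = [snrs[0] - (snrs[1] - snrs[0])] + list(snrs) + [snrs[-1] + (snrs[-1] - snrs[-2])]
    ps = [0.0] + p + [1.0]
    return sum((ps[i + 1] - ps[i]) * (xs[i] + xs[i + 1]) / 2.0
               for i in range(len(xs) - 1))
```

For example, with the study's four SNRs and a word recognized 0%, 25%, 75%, and 100% of the time, the estimate falls midway between the −2 and 3 dB presentation levels.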


2013 ◽  
Vol 24 (01) ◽  
pp. 037-045 ◽  
Author(s):  
Shannon B. Palmer ◽  
Frank E. Musiek

Background: Normal temporal processing is important for the perception of speech in quiet and in difficult listening situations. Temporal resolution is commonly measured using a behavioral gap detection task, where the patient or subject must participate in the evaluation process. This is difficult to achieve with subjects who cannot reliably complete a behavioral test. However, recent research has investigated the use of evoked potential measures to evaluate gap detection. Purpose: The purpose of the current study was to record N1-P2 responses to gaps in broadband noise in normal hearing young adults. Comparisons were made of the N1 and P2 latencies, amplitudes, and morphology across different gap durations in an effort to quantify the changing responses of the brain to these stimuli. It was the goal of this study to show that electrophysiological recordings can be used to evaluate temporal resolution and measure the influence of short and long gaps on the N1-P2 waveform. Research Design: This study used a repeated-measures design. All subjects completed a behavioral gap detection procedure to establish their behavioral gap detection threshold (BGDT). N1-P2 waveforms were recorded in response to gaps in a broadband noise. Gap durations were 20 msec, 2 msec above the subject's BGDT, and 2 msec. These durations were chosen to represent a suprathreshold gap, a near-threshold gap, and a subthreshold gap. Study Sample: Fifteen normal-hearing young adult females were evaluated. Subjects were recruited from the local university community. Data Collection and Analysis: Latencies and amplitudes for N1 and P2 were compared across gap durations for all subjects using a repeated-measures analysis of variance. A qualitative description of responses was also included. Results: Most subjects did not display an N1-P2 response to a 2 msec gap, but all subjects had clear evoked potential responses to the 20 msec and 2+ msec gaps.
Decreasing gap duration toward threshold resulted in decreasing waveform amplitude. However, N1 and P2 latencies remained stable as gap duration changed. Conclusions: N1-P2 waveforms can be elicited by gaps in noise in young normal-hearing adults. Responses are present for gaps as little as 2 msec above the behavioral gap detection threshold (BGDT). Gaps below the BGDT do not generally evoke an electrophysiological response. These findings indicate that when a waveform is present, the gap duration is likely above the listener's BGDT. Waveform amplitude is also a good index of gap detection, since amplitude decreases with decreasing gap duration. Future studies in this area will focus on various age groups and individuals with auditory disorders.
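The gap-in-noise stimulus itself is straightforward to construct: a broadband noise burst with a silent interval of the desired duration. A hedged sketch follows; the sampling rate, function name, and amplitude handling are illustrative assumptions, and real stimuli would also ramp the gap edges rather than cut them abruptly:

```python
import random

def noise_with_gap(dur_ms, gap_ms, gap_onset_ms, fs=44100, seed=0):
    """White-noise burst with a silent gap inserted at gap_onset_ms.
    Returns a list of samples in [-1, 1]; samples inside the gap are zeroed."""
    rng = random.Random(seed)
    n = int(fs * dur_ms / 1000)          # total number of samples
    g0 = int(fs * gap_onset_ms / 1000)   # first sample of the gap
    g1 = g0 + int(fs * gap_ms / 1000)    # one past the last gap sample
    return [0.0 if g0 <= i < g1 else rng.uniform(-1.0, 1.0)
            for i in range(n)]
```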


1992 ◽  
Vol 35 (3) ◽  
pp. 661-665 ◽  
Author(s):  
Gail D. Chermak ◽  
M. Janet Montgomery

The form equivalence of the Selective Auditory Attention Test (SAAT) was examined. Forty normal-hearing 6-year-old boys and girls were assigned randomly in equal numbers to one of two groups. Each group listened to four lists of words at 70 dB SPL sound field in one of two orders. Equal mean difficulty and significant correlations between lists in quiet and between lists presented with competing speech substantiate the form equivalence of the SAAT. Form equivalence analyzed for individual subjects confirmed conclusions derived from analysis of group data. A learning effect, seen as improved mean performance for the second of the two lists presented in competing speech, resulted from the repeated measures experimental design of the study and does not undermine the clinical viability of the SAAT.


2012 ◽  
Vol 23 (07) ◽  
pp. 501-509 ◽  
Author(s):  
Erin C. Schafer ◽  
Jody Pogue ◽  
Tyler Milrany

Background: Speech recognition abilities of adults and children using cochlear implants (CIs) are significantly degraded in the presence of background noise, making this an important area of study and assessment by CI manufacturers, researchers, and audiologists. However, at this time there are a limited number of fixed-intensity sentence recognition tests available that also have multiple, equally intelligible lists in noise. One measure of speech recognition, the AzBio Sentence Test, provides 10-talker babble on the commercially available compact disc; however, there is no published evidence to support equivalency of the 15-sentence lists in noise for listeners with normal hearing (NH) or CIs. Furthermore, there is limited or no published data on the reliability, validity, and normative data for this test in noise for listeners with CIs or NH. Purpose: The primary goals of this study were to examine the equivalency of the AzBio Sentence Test lists at two signal-to-noise ratios (SNRs) in participants with NH and at one SNR for participants with CIs. Analyses were also conducted to establish the reliability, validity, and preliminary normative data for the AzBio Sentence Test for listeners with NH and CIs. Research Design: A cross-sectional, repeated measures design was used to assess speech recognition in noise for participants with NH or CIs. Study Sample: The sample included 14 adults with NH and 12 adults or adolescents with Cochlear Freedom CI sound processors. Participants were recruited from the University of North Texas clinic population or from local CI centers. Data Collection and Analysis: Speech recognition was assessed using the 15 lists of the AzBio Sentence Test and the 10-talker babble. With the intensity of the sentences fixed at 73 dB SPL, listeners with NH were tested at 0 and −3 dB SNRs, and participants with CIs were tested at a +10 dB SNR. Repeated measures analysis of variance (ANOVA) was used to analyze the data. 
Results: The primary analyses revealed significant differences in performance across the 15 lists on the AzBio Sentence Test for listeners with NH and CIs. However, a follow-up analysis revealed no significant differences in performance across 10 of the 15 lists. Using the 10 equally intelligible lists, a comparison of speech recognition performance across the two groups suggested similar performance between NH participants at a −3 dB SNR and CI users at a +10 dB SNR. Several additional analyses were conducted to support the reliability and validity of the 10 equally intelligible AzBio sentence lists in noise, and preliminary normative data were provided. Conclusions: Ten lists of the commercial version of the AzBio Sentence Test may be used as a reliable and valid measure of speech recognition in noise in listeners with NH or CIs. The equivalent lists may be used for a variety of purposes including audiological evaluations, determination of CI candidacy, hearing aid and CI programming considerations, research, and recommendations for hearing assistive technology. In addition, the preliminary normative data provided in this study establish a starting point for the creation of comprehensive normative data for the AzBio Sentence Test.


2012 ◽  
Vol 23 (02) ◽  
pp. 092-096 ◽  
Author(s):  
Richard H. Wilson ◽  
Kelly L. Watts

Background: The Words-in-Noise Test (WIN) was developed as an instrument to quantify the ability of listeners to understand monosyllabic words in background noise using multitalker babble (Wilson, 2003). The 50% point, which is calculated with the Spearman-Kärber equation (Finney, 1952), is used as the evaluative metric with the WIN materials. Initially, the WIN was designed as a 70-word instrument that presented ten unique words at each of seven signal-to-noise ratios from 24 to 0 dB in 4 dB decrements. Subsequently, the 70-word list was parsed into two 35-word lists that achieved equivalent recognition performances (Wilson and Burks, 2005). This report involves the development of a third list (WIN List 3) that was developed to serve as a practice list to familiarize the participant with listening to words presented in background babble. Purpose: To determine—on young listeners with normal hearing and on older listeners with sensorineural hearing loss—the psychometric properties of the WIN List 3 materials. Research Design: A quasi-experimental, repeated-measures design was used. Study Sample: Twenty-four young adult listeners (M = 21.6 yr) with normal pure-tone thresholds (≤20 dB HL at 250 to 8000 Hz) and 24 older listeners (M = 65.9 yr) with sensorineural hearing loss participated. Data Collection and Analysis: The level of the babble was fixed at 80 dB SPL with the level of the words varied from 104 to 80 dB SPL in 4 dB decrements. Results: For listeners with normal hearing, the 50% points for Lists 1 and 2 were similar (4.3 and 5.1 dB S/N, respectively), both of which were lower than the 50% point for List 3 (7.4 dB S/N). A similar relation was observed with the listeners with hearing loss, 50% points for Lists 1 and 2 of 12.2 and 12.4 dB S/N, respectively, compared to 15.8 dB S/N for List 3. The differences between Lists 1 and 2 and List 3 were significant. 
The relations among the psychometric functions and among the individual data both reflected these differences. Conclusions: The significant ~3 dB difference between performances on WIN Lists 1 and 2 and on WIN List 3 by the listeners with normal hearing and the listeners with hearing loss dictates caution in the use of List 3. The use of WIN List 3 should be reserved for ancillary purposes in which equivalent recognition performances are not required, for example, as a practice list or a stand-alone measure.


2019 ◽  
Author(s):  
Karina C. De Sousa ◽  
De Wet Swanepoel ◽  
David R. Moore ◽  
Hermanus Carel Myburgh ◽  
Cas Smits

Abstract: Objective: The digits-in-noise test (DIN) has become increasingly popular as a consumer-based method to screen for hearing loss. Current versions of all DINs either test ears monaurally or present identical stimuli binaurally (i.e., diotic noise and speech, NoSo). Unfortunately, presentation of identical stimuli to each ear inhibits detection of unilateral sensorineural hearing loss (SNHL), and neither diotic nor monaural presentation sensitively detects conductive hearing loss (CHL). Following an earlier finding of enhanced sensitivity in normally hearing listeners, this study tested the hypothesis that interaural antiphasic digit presentation (NoSπ) would improve sensitivity to hearing loss caused by unilateral or asymmetric SNHL, symmetric SNHL, or CHL. Design: This cross-sectional study recruited adults (18–84 years) with various levels of hearing, based on a four-frequency pure-tone average (PTA) at 0.5, 1, 2, and 4 kHz. The study sample comprised listeners with normal hearing (n = 41; PTA ≤ 25 dB HL in both ears), symmetric SNHL (n = 57; PTA > 25 dB HL), unilateral or asymmetric SNHL (n = 24; PTA > 25 dB HL in the poorer ear), and CHL (n = 23; PTA > 25 dB HL and PTA air-bone gap ≥ 20 dB HL in the poorer ear). Antiphasic and diotic speech reception thresholds (SRTs) were compared using a repeated-measures design. Results: The antiphasic DIN was significantly more sensitive to all three forms of hearing loss than the diotic DIN. SRT test-retest reliability was high for all tests (ICC r > 0.89). Area under the receiver operating characteristic (ROC) curve for detection of hearing loss (> 25 dB HL) was higher for antiphasic (0.94) than for diotic (0.77) presentation. After correcting for age, PTA of listeners with normal hearing or symmetric SNHL was more strongly correlated with antiphasic (rpartial[96] = 0.69) than diotic (rpartial = 0.54) SRTs. The slope of the fitted regression lines predicting SRT from PTA was significantly steeper for antiphasic than diotic DIN. For listeners with normal hearing or CHL, antiphasic SRTs were more strongly correlated with PTA (rpartial[62] = 0.92) than diotic SRTs (rpartial[62] = 0.64). The slope of the regression line with PTA was also significantly steeper for antiphasic than diotic DIN. Severity of asymmetric hearing loss (poorer-ear PTA) was unrelated to SRT. No effect of self-reported English competence on either antiphasic or diotic DIN was observed among the mixed first-language participants. Conclusions: Antiphasic digit presentation markedly improved the sensitivity of the DIN test to detect SNHL, either symmetric or asymmetric, while keeping test duration to a minimum by testing binaurally. In addition, the antiphasic DIN was able to detect CHL, a shortcoming of previous monaural or binaurally diotic DIN versions. The antiphasic DIN is thus a powerful tool for population-based screening. This enhanced functionality, combined with smartphone delivery, could make the antiphasic DIN suitable as a primary screen that is accessible to a large global audience.
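ROC areas of the kind reported above have a convenient rank-based interpretation: the AUC equals the probability that a randomly chosen impaired listener's SRT exceeds (is worse than) a randomly chosen normal listener's SRT. A small illustrative sketch of that equivalence (function name and data are hypothetical; this does not reproduce the study's actual analysis):

```python
def roc_auc(impaired_srts, normal_srts):
    """AUC via the Mann-Whitney U relation: the fraction of
    (impaired, normal) pairs in which the impaired SRT is higher;
    tied pairs count as half."""
    wins = 0.0
    for si in impaired_srts:
        for sn in normal_srts:
            if si > sn:
                wins += 1.0
            elif si == sn:
                wins += 0.5
    return wins / (len(impaired_srts) * len(normal_srts))
```

Perfect separation of the two groups yields an AUC of 1.0; completely overlapping distributions yield 0.5, i.e., chance-level screening.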


2013 ◽  
Vol 24 (01) ◽  
pp. 017-025 ◽  
Author(s):  
Karrie L. Recker ◽  
Brent W. Edwards

Background: Acceptable noise level (ANL) is a measure of the maximum amount of background noise that a listener is willing to “put up with” while listening to running speech. This test is unique in that it can predict with a high degree of accuracy who will be a successful hearing-aid wearer. Individuals who tolerate high levels of background noise are generally successful hearing-aid wearers, whereas individuals who do not tolerate background noise well are generally unsuccessful hearing-aid wearers. Purpose: Various studies have been unsuccessful in trying to relate ANLs to listener characteristics or other test results. Presumably, understanding the perceptual mechanism by which listeners determine their ANLs could provide an understanding of the ANL's unique predictive abilities and our current inability to correlate these results with other listener attributes or test results. As a first step in investigating this problem, the relationships between ANLs and other threshold measures where listeners adjust the signal-to-noise ratio (SNR) according to some criterion in a way similar to the ANL measure were examined. Research Design and Study Sample: Ten normal-hearing and 10 hearing-impaired individuals participated in a laboratory experiment that followed a within-subjects, repeated-measures design. Data Collection and Analysis: Participants were seated in a sound booth. Running speech and noise (eight-talker babble) were presented from a loudspeaker at 0°, 3 ft in front of the participant. Individuals adjusted either the level of the speech or the level of the background noise. Specifically, with the speech fixed at different levels (50, 63, 75, or 88 dBA), participants performed the ANL task, in which they adjusted the level of the background noise to the maximum level at which they were willing to listen while following the speech. 
With the noise fixed at different levels (50, 60, 70, or 80 dBA), participants adjusted the level of the speech to the minimum, preferred, or maximum levels at which they were willing to listen while following the speech. Additionally, for the minimum acceptable speech level task, each participant was tested at four participant-specific noise levels, based on his/her ANL results. To emphasize that the speech level was adjusted in these measurements, three new terms were coined: “minimum acceptable speech level” (MinASL), “preferred speech level” (PSL), and “maximum acceptable speech level” (MaxASL). Each condition was presented twice, and the results were averaged. Test order and presentation level were randomized. Hearing-impaired participants were tested in the aided condition only. Results: For most participants, as the presentation level increased, SNRs increased for the ANL test but decreased for the MinASL, PSL, and MaxASL tests. For a few participants, ANLs were similar to MinASLs. For most test conditions, the normal-hearing results were not significantly different from those of the hearing-impaired participants. Conclusions: For most participants, stimulus level affected the SNRs at which they were willing to listen. However, a subset of listeners was willing to listen at a constant SNR for the ANL and MinASL tests. Furthermore, for these individuals, ANLs and MinASLs were roughly equal, suggesting that these individuals may have used the same perceptual criterion for both tests.


2014 ◽  
Vol 25 (06) ◽  
pp. 576-583 ◽  
Author(s):  
Samuel R. Atcherson ◽  
Page C. Moore

Background: The middle latency response (MLR) is considered a valid clinical tool for assessing the integrity of cortical and subcortical structures. Several investigators have demonstrated that a rising frequency chirp stimulus is capable of eliciting not only larger wave V amplitudes but larger MLR components as well. However, the chirp has never been specifically examined in a hemispheric electrode montage setup that is typical for neurodiagnostic application and site-of-lesion testing. Purpose: The purpose of this study was to examine the effect of chirp, click, and toneburst stimuli on MLR waveform peak latency and peak-to-peak amplitude in a hemispheric electrode montage setup. Research Design: This study used a repeated-measures design. Study Sample: A total of 10 young adult participants (3 males, 7 females) with normal hearing were recruited and had negative histories of audiologic, otologic, and neurologic involvement, and no reported language or learning difficulties. Data Collection and Analysis: MLR latencies (Na, Pa, Nb, and Pb) and peak-to-peak amplitudes (Na-Pa, Pa-Nb, and Nb-Pb) were measured for all conditions and were statistically evaluated for left hemisphere-right ear (C3-A2) and right hemisphere-left ear (C4-A1) recordings. Results: Statistical analyses revealed no significant difference between C3-A2 and C4-A1 peak-to-peak amplitudes; therefore, data were collapsed. Stimulus comparisons revealed that Na latencies evoked by tonebursts were statistically prolonged compared with both chirp and click, and that both Na-Pa and Pa-Nb peak-to-peak amplitudes were statistically larger for chirps compared with both clicks and tonebursts, and for clicks compared with tonebursts. Conclusions: The results of this study support the hypothesis that a chirp would offer a clinical advantage over the click and toneburst in overall peak-to-peak amplitude.
As expected, normal-hearing participants did not exhibit hemispheric differences when comparing C3-A2 and C4-A1 peak-to-peak amplitudes, demonstrating symmetric auditory brain function. However, chirp-evoked MLRs will require further study to determine their usefulness in clinical practice.

