Classical and controlled auditory mismatch responses to multiple physical deviances in anaesthetised and conscious mice

2019 ◽  
Author(s):  
Jamie A. O’Reilly ◽  
Bernard A. Conway

Abstract Human mismatch negativity (MMN) is modelled in rodents and other non-human species to examine its underlying neurological mechanisms, primarily described in terms of deviance-detection and adaptation. Using the mouse model, we aim to elucidate subtle dependencies between the mismatch response (MMR) and different physical properties of sound. Epidural field potentials were recorded from urethane-anaesthetised and conscious mice during oddball and many-standards control paradigms, with stimuli varying in duration, frequency, intensity, and inter-stimulus interval. Resulting auditory evoked potentials, classical MMR (oddball – standard), and controlled MMR (oddball – control) waveforms were analysed. Stimulus duration correlated with stimulus-off response peak latency (p < 0.0001). Frequency (p < 0.0001), intensity (p < 0.0001), and inter-stimulus interval (p < 0.0001) correlated with stimulus-on N1 and P1 (conscious only) peak amplitudes. These relationships were instrumental in shaping classical MMR morphology in both anaesthetised and conscious animals, suggesting these waveforms reflect modification of normal auditory processing by different physical properties of stimuli. Controlled MMR waveforms appeared to exhibit habituation to auditory stimulation over time, which was observed equally in response to oddball and standard stimuli. These observations are not consistent with the mechanisms thought to underlie human MMN, which currently do not address differences due to specific physical features of auditory deviance. Thus, no evidence was found to objectively support the deviance-detection or adaptation hypotheses of MMN in relation to anaesthetised or conscious mice.
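The classical (oddball – standard) and controlled (oddball – control) difference waves described above reduce to subtractions of trial-averaged epochs. A minimal sketch in Python/NumPy, using synthetic data; all array names and sizes are hypothetical, not the study's actual recordings:

```python
import numpy as np

def mismatch_response(oddball_epochs, reference_epochs):
    """Difference wave between two averaged evoked potentials.

    oddball_epochs, reference_epochs: arrays of shape (n_trials, n_samples).
    Returns the (n_samples,) difference waveform.
    """
    return oddball_epochs.mean(axis=0) - reference_epochs.mean(axis=0)

# Hypothetical epoched data: 100 trials x 500 samples per condition
rng = np.random.default_rng(0)
oddball = rng.normal(size=(100, 500))
standard = rng.normal(size=(100, 500))
control = rng.normal(size=(100, 500))

classical_mmr = mismatch_response(oddball, standard)   # oddball - standard
controlled_mmr = mismatch_response(oddball, control)   # oddball - control
```

The many-standards control shown here matters because the control stimulus is physically identical to the oddball but presented without a repetitive context, which is what lets the controlled MMR separate deviance effects from stimulus-specific response differences.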

1988 ◽  
Vol 15 (2) ◽  
pp. 173-178 ◽  
Author(s):  
Sanford E. Gerber ◽  
Traci K. Davis ◽  
Kathleen M. Mastrini

2021 ◽  
Vol 64 (10) ◽  
pp. 4014-4029
Author(s):  
Kathy R. Vander Werff ◽  
Christopher E. Niemczak ◽  
Kenneth Morse

Purpose Background noise has been categorized as energetic masking, due to spectrotemporal overlap of the target and masker at the auditory periphery, or informational masking, due to cognitive-level interference from relevant content such as speech. The effects of masking on cortical and sensory auditory processing can be objectively studied with the cortical auditory evoked potential (CAEP). However, whether effects on neural response morphology are due to energetic spectrotemporal differences or to informational content is not fully understood. The current multi-experiment series was designed to assess the effects of speech versus nonspeech maskers on the neural encoding of speech information in the central auditory system, specifically the effects of speech babble maskers varying in talker number. Method CAEPs were recorded from normal-hearing young adults in response to speech syllables in the presence of energetic maskers (white or speech-shaped noise) and varying amounts of informational masking (speech babble maskers). The primary manipulation of informational masking was the number of talkers in the speech babble, and CAEP results were compared to those for nonspeech maskers with different temporal and spectral characteristics. Results Even when nonspeech noise maskers were spectrally shaped and temporally modulated to match the speech babble maskers, notable changes in the typical morphology of the CAEP in response to speech stimuli were identified in the presence of both primarily energetic maskers and speech babble maskers with varying numbers of talkers. Conclusions While differences in CAEP outcomes did not reach significance by number of talkers, neural components were significantly affected by speech babble maskers compared to nonspeech maskers. These results suggest an informational masking influence on the neural encoding of speech information at the sensory cortical level of auditory processing, even without active participation on the part of the listener.


Author(s):  
Dongxin Liu ◽  
Jiong Hu ◽  
Ruijuan Dong ◽  
Jing Chen ◽  
Gabriella Musacchia ◽  
...  

2013 ◽  
Author(s):  
Zacharias Vamvakousis ◽  
Rafael Ramirez

P300-based brain-computer interfaces (BCIs) are especially useful for people with conditions that prevent them from communicating normally (e.g. brain or spinal cord injury). However, most existing P300-based BCI systems use visual stimulation, which may not be suitable for patients with deteriorating sight (e.g. patients suffering from amyotrophic lateral sclerosis). Moreover, P300-based BCI systems rely on expensive equipment, which greatly limits their use outside the clinical environment. We therefore propose a multi-class BCI system based solely on auditory stimuli that makes use of low-cost EEG technology. We explored different combinations of timbre, pitch and spatial auditory stimuli (TimPiSp: timbre-pitch-spatial, TimSp: timbre-spatial, and Timb: timbre-only) and three inter-stimulus intervals (150ms, 175ms and 300ms), and evaluated our system by conducting an oddball task on 7 healthy subjects. This is the first study in which these 3 auditory cues are compared. After averaging several repetitions in the 175ms inter-stimulus-interval condition, we obtained average selection accuracies of 97.14%, 91.43%, and 88.57% for the TimPiSp, TimSp, and Timb modalities, respectively. The best subject’s accuracy was 100% in all modalities and inter-stimulus intervals. The average information transfer rate for the 150ms inter-stimulus interval in the TimPiSp modality was 14.85 bits/min, and the best subject’s information transfer rate was 39.96 bits/min in the 175ms Timbre condition. Based on the TimPiSp modality, an auditory P300 speller was implemented and evaluated by asking users to type a 12-character phrase. Six out of 7 users completed the task. The average spelling speed was 0.56 chars/min, and the best subject’s performance was 0.84 chars/min. These results show that the proposed auditory BCI is successful with healthy subjects and may constitute the basis for future implementations of more practical and affordable auditory P300-based BCI systems.
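Information transfer rates like those quoted above are conventionally computed with Wolpaw's formula: bits per selection B = log2(N) + P·log2(P) + (1 − P)·log2((1 − P)/(N − 1)), scaled by the selection rate. A sketch of that standard formula (the abstract does not state which ITR definition was used, and the example values below are hypothetical):

```python
import math

def wolpaw_itr(n_classes, accuracy, selections_per_min):
    """Wolpaw information transfer rate in bits/min.

    bits/selection = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))
    """
    if accuracy >= 1.0:
        bits = math.log2(n_classes)  # the P=1 limit of the formula
    else:
        p = accuracy
        bits = (math.log2(n_classes) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n_classes - 1)))
    return bits * selections_per_min

# Hypothetical example: 8 classes at 90% accuracy, 10 selections/min
itr = wolpaw_itr(8, 0.90, 10)
```

Note that averaging repetitions to raise accuracy also lowers the selection rate, so ITR captures the trade-off between the two that the abstract's per-ISI comparisons reflect.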


2021 ◽  
Author(s):  
Shannon L.M. Heald ◽  
Stephen C. Van Hedger ◽  
John Veillette ◽  
Katherine Reis ◽  
Joel S. Snyder ◽  
...  

Abstract The ability to generalize rapidly across specific experiences is vital for robust recognition of new patterns, especially in speech perception, given acoustic-phonetic pattern variability. Behavioral research has demonstrated that listeners can rapidly generalize their experience with a talker’s speech and quickly improve understanding of a difficult-to-understand talker without prolonged practice, e.g., even after a single training session. Here, we examine the differences in neural responses to generalized versus rote learning in auditory cortical processing by training listeners to understand a novel synthetic talker, using a Pretest-Posttest design with electroencephalography (EEG). Participants were trained using either (1) a large inventory of words where no words repeated across the experiment (generalized learning) or (2) a small inventory of words where words repeated (rote learning). Analysis of long-latency auditory evoked potentials at Pretest and Posttest revealed that while rote and generalized learning both produce rapid changes in auditory processing, the nature of these changes differed. In the context of adapting to a talker, generalized learning is marked by an amplitude reduction in the N1-P2 complex and by the presence of a late-negative (LN) wave in the auditory evoked potential following training. Rote learning, however, is marked only by temporally later source configuration changes. The early N1-P2 change, found only for generalized learning, suggests that generalized learning relies on the attentional system to reorganize the way acoustic features are selectively processed. This change in relatively early sensory processing (i.e. during the first 250ms) is consistent with an active processing account of speech perception, which proposes that the ability to rapidly adjust to the specific vocal characteristics of a new talker (for which rote learning is rare) relies on attentional mechanisms to adaptively tune early auditory processing sensitivity. Statement of Significance Previous research on perceptual learning has typically examined neural responses during rote learning: training and testing are carried out with the same stimuli. As a result, it is not clear that findings from these studies can explain learning that generalizes to novel patterns, which is critical in speech perception. Are neural responses to generalized learning in auditory processing different from neural responses to rote learning? Results indicate that rote learning of a particular talker’s speech involves brain regions focused on the encoding and retrieval of specific learned patterns, whereas generalized learning involves brain regions involved in reorganizing attention during early sensory processing. In learning speech from a novel talker, only generalized learning is marked by changes in the N1-P2 complex (reflective of secondary auditory cortical processing). The results are consistent with the view that robust speech perception relies on the fast adjustment of attention mechanisms to adaptively tune auditory sensitivity to cope with acoustic variability.
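The N1-P2 amplitude effect described above is typically quantified as a peak-to-peak measurement on the averaged evoked potential. A minimal sketch with synthetic data (the window bounds are typical adult-latency assumptions, not the authors' parameters):

```python
import numpy as np

def n1_p2_amplitude(erp, times, n1_window=(0.08, 0.15), p2_window=(0.15, 0.25)):
    """Peak-to-peak N1-P2 amplitude from an averaged evoked potential.

    erp: 1-D averaged waveform; times: sample times in seconds.
    N1 is the minimum within its window, P2 the maximum within its window.
    """
    n1_mask = (times >= n1_window[0]) & (times <= n1_window[1])
    p2_mask = (times >= p2_window[0]) & (times <= p2_window[1])
    return erp[p2_mask].max() - erp[n1_mask].min()

# Hypothetical ERP sampled at 1 kHz from -0.1 to 0.5 s:
# a Gaussian N1 trough near 100 ms and a P2 peak near 180 ms
times = np.arange(-0.1, 0.5, 0.001)
erp = (-2.0 * np.exp(-((times - 0.10) / 0.02) ** 2)
       + 3.0 * np.exp(-((times - 0.18) / 0.03) ** 2))
amp = n1_p2_amplitude(erp, times)
```

Comparing this measure at Pretest and Posttest, per condition, is one common way to operationalize the training-related amplitude reduction the abstract reports.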


Author(s):  
Pamela Papile Lunardelo ◽  
Marisa Tomoe Hebihara Fukuda ◽  
Patricia Aparecida Zuanetti ◽  
Ângela Cristina Pontes-Fernandes ◽  
Marita Iannazzo Ferretti ◽  
...  

2020 ◽  
Vol 24 (04) ◽  
pp. e399-e406
Author(s):  
Joyce Miranda Santiago ◽  
Cyntia Barbosa Laureano Luiz ◽  
Michele Garcia ◽  
Daniela Gil

Abstract Introduction The auditory structures of the brainstem are involved in binaural interaction, which contributes to sound localization and auditory figure-ground perception. Objective To investigate the performance of young adults in the masking level difference (MLD) test, brainstem auditory evoked potentials (BAEPs) with click stimuli, and the frequency-following response (FFR), and to verify the correlation between the findings, considering the topographic origin of the components of these procedures. Methods A total of 20 female subjects between 18 and 30 years of age, with normal hearing and no complaints concerning central auditory processing, underwent a basic audiological evaluation, as well as the MLD test, BAEPs and the FFR. Results The mean result on the MLD test was 10.70 dB. There was a statistically significant difference in the absolute latencies of waves I, III and V of the BAEPs between the ears. A change in the FFR, characterized by the absence of the C, E and F waves, was noticed. There was a statistically significant positive correlation between wave V of the BAEPs and the MLD, and between the mean MLD and the V, A and F components of the FFR. Conclusion The mean MLD was adequate. In the BAEPs, we observed that click stimulus transmission occurred faster in the right ear. The FFR showed absence of some components. The mean MLD correlated positively with the BAEPs and the FFR.
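The MLD itself is simple arithmetic: the masked threshold in the homophasic condition (SoNo, signal in phase at both ears) minus the threshold in the antiphasic condition (SπNo, signal phase-inverted at one ear). A sketch with hypothetical threshold values chosen to be consistent with the ~10.7 dB mean reported above:

```python
def masking_level_difference(threshold_sono_db, threshold_spino_db):
    """MLD in dB: the release from masking gained by inverting the signal
    phase at one ear (SpiNo) relative to the homophasic condition (SoNo)."""
    return threshold_sono_db - threshold_spino_db

# Hypothetical masked thresholds (dB signal-to-noise ratio)
mld = masking_level_difference(-14.0, -24.7)  # about 10.7 dB of release
```

A larger MLD indicates better use of binaural phase cues, which is why the measure correlates with brainstem-level responses such as BAEP wave V.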


1961 ◽  
Vol 13 (1) ◽  
pp. 15-18 ◽  
Author(s):  
P. D. McCormack ◽  
A. W. Prysiazniuk
