The Role of Event-Related Brain Potentials in Assessing Central Auditory Processing

2007 ◽  
Vol 18 (07) ◽  
pp. 573-589 ◽  
Author(s):  
Claude Alain ◽  
Kelly Tremblay

The perception of complex acoustic signals such as speech and music depends on the interaction between peripheral and central auditory processing. As information travels from the cochlea to primary and associative auditory cortices, the incoming sound is subjected to increasingly detailed and refined analysis. These various levels of analysis are thought to include low-level automatic processes that detect, discriminate, and group sounds that are similar in physical attributes such as frequency, intensity, and location, as well as higher-level schema-driven processes that reflect listeners' experience and knowledge of the auditory environment. In this review, we describe studies that have used event-related brain potentials to investigate the processing of complex acoustic signals (e.g., speech, music). In particular, we examine the role of hearing loss in the neural representation of sound and how cognitive factors and learning can help compensate for perceptual difficulties. The notion of auditory scene analysis is used as a conceptual framework for interpreting and studying the perception of sound.

2020 ◽  
Author(s):  
Fabrice Bardy

Abstract A novel experimental paradigm, "deconvolution of ears' activity" (DEA), is presented that makes it possible to disentangle overlapping neural activity from both auditory cortices when two auditory stimuli are presented close together in time in each ear. Pairs of multi-tone complexes were presented either binaurally or sequentially, alternating presentation order between the ears (i.e., the first tone complex of a pair presented to one ear and the second to the other), using stimulus onset asynchronies (SOAs) shorter than the neural response length. This timing strategy creates overlapping responses, which can be mathematically separated using least-squares deconvolution. The DEA paradigm allowed evaluation of the neural representation in the auditory cortex of responses to stimuli presented at syllabic rates (i.e., SOAs between 120 and 260 ms). Analysis of the neuromagnetic responses in each cortex offered a sensitive technique for studying hemispheric lateralization, ear representation (right versus left), pathway advantage (contralateral versus ipsilateral), and cortical binaural interaction. As a proof of concept of the DEA paradigm, data were recorded from three normal-hearing adults. Results showed good test-retest reliability and indicated that the difference score between hemispheres can potentially be used to assess central auditory processing. This suggests that the method could be a valuable tool for generating an objective "auditory profile" by assessing individual fine-grained auditory processing using a non-invasive recording method.
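The least-squares separation step described in the abstract can be sketched in the time domain: overlapping responses are modeled as a known indicator design matrix (one column per condition and latency) multiplied by the unknown response waveforms, which an ordinary least-squares fit then recovers. The following is a minimal illustration under assumed names and a simplified noiseless design, not the authors' implementation:

```python
import numpy as np

def deconvolve_responses(y, onsets_per_cond, resp_len):
    """Least-squares deconvolution of overlapping evoked responses.

    y               : 1-D recording (samples,)
    onsets_per_cond : list of arrays of stimulus-onset sample indices,
                      one array per condition (e.g., left ear, right ear)
    resp_len        : assumed length of each evoked response, in samples
    Returns one estimated response waveform per condition.
    """
    T = len(y)
    n_cond = len(onsets_per_cond)
    # Design matrix: column (c, lag) is 1 at every sample where a stimulus
    # of condition c occurred `lag` samples earlier.
    X = np.zeros((T, n_cond * resp_len))
    for c, onsets in enumerate(onsets_per_cond):
        for lag in range(resp_len):
            idx = np.asarray(onsets) + lag
            idx = idx[idx < T]
            X[idx, c * resp_len + lag] = 1.0
    # Minimum-residual solution disentangles the overlapping responses.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta.reshape(n_cond, resp_len)
```

Provided the SOAs are jittered so the design matrix has full column rank, overlapping responses can be recovered exactly in the noiseless case even when the SOA is shorter than the response length.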


Author(s):  
Hui Sun ◽  
Kazuya Saito ◽  
Adam Tierney

Abstract Precise auditory perception at a subcortical level (neural representation and encoding of sound) has been suggested as a form of implicit L2 aptitude in naturalistic settings. Emerging evidence suggests that such implicit aptitude explains some variance in L2 speech perception and production among adult learners with different first-language backgrounds and immersion experience. Examining 46 Chinese learners of English, the current study longitudinally investigated the extent to which explicit and implicit auditory processing ability could predict L2 segmental and prosody acquisition over five months of early immersion. According to the results, participants' L2 gains were associated with more explicit and integrative auditory processing ability (remembering and reproducing music sequences), while the role of implicit, preconscious perception appeared to be negligible at the initial stage of postpubertal L2 speech learning.


2014 ◽  
Vol 78 (3) ◽  
pp. 361-378 ◽  
Author(s):  
Mona Isabel Spielmann ◽  
Erich Schröger ◽  
Sonja A. Kotz ◽  
Alexandra Bendixen

2015 ◽  
Vol 370 (1664) ◽  
pp. 20140089 ◽  
Author(s):  
Laurel J. Trainor

Whether music was an evolutionary adaptation that conferred survival advantages or a cultural creation has generated much debate. Consistent with an evolutionary hypothesis, music is unique to humans, emerges early in development and is universal across societies. However, the adaptive benefit of music is far from obvious. Music is highly flexible, generative and changes rapidly over time, consistent with a cultural creation hypothesis. In this paper, it is proposed that much of musical pitch and timing structure adapted to preexisting features of auditory processing that evolved for auditory scene analysis (ASA). Thus, music may have emerged initially as a cultural creation made possible by preexisting adaptations for ASA. However, some aspects of music, such as its emotional and social power, may have subsequently proved beneficial for survival and led to adaptations that enhanced musical behaviour. Ontogenetic and phylogenetic evidence is considered in this regard. In particular, enhanced auditory–motor pathways in humans that enable movement entrainment to music and consequent increases in social cohesion, and pathways enabling music to affect reward centres in the brain should be investigated as possible musical adaptations. It is concluded that the origins of music are complex and probably involved exaptation, cultural creation and evolutionary adaptation.


2021 ◽  
Vol 44 ◽  
Author(s):  
Laurel J. Trainor

Abstract The evolutionary origins of complex capacities such as musicality are not simple, and likely involved many interacting steps of musicality-specific adaptations, exaptations, and cultural creation. A full account of the origins of musicality needs to consider the role of ancient adaptations such as credible signaling, auditory scene analysis, and prediction-reward circuits in constraining the emergence of musicality.


2007 ◽  
Vol 21 (3-4) ◽  
pp. 164-175 ◽  
Author(s):  
Elyse S. Sussman

The question of whether the mismatch negativity (MMN) is modulated by attention has been debated for over a decade. Although the MMN is widely regarded as reflecting a preattentive auditory process, many studies have shown attention effects on MMN. So, what does preattentive mean if attention can modulate the MMN? To understand the function of MMN in auditory processing, we need to shed new light on the “MMN and attention” debate. This review will discuss the apparent paradox that MMN can be modulated by attention and still be considered an attention-independent process, and provide a new framework for viewing the MMN system. The new model proposes that the principal factor governing MMN is the sound context. MMN generation relies on multiple processing mechanisms that are part of a larger system of auditory scene analysis.


2013 ◽  
Vol 109 (3) ◽  
pp. 721-733 ◽  
Author(s):  
Jason V. Thompson ◽  
James M. Jeanne ◽  
Timothy Q. Gentner

Changes in inhibition during development are well documented, but the role of inhibition in adult learning-related plasticity is not understood. In songbirds, vocal recognition learning alters the neural representation of songs across the auditory forebrain, including the caudomedial nidopallium (NCM), a region analogous to mammalian secondary auditory cortices. Here, we block local inhibition with the iontophoretic application of gabazine, while simultaneously measuring song-evoked spiking activity in NCM of European starlings trained to recognize sets of conspecific songs. We find that local inhibition differentially suppresses the responses to learned and unfamiliar songs and enhances spike-rate differences between learned categories of songs. These learning-dependent response patterns emerge, in part, through inhibitory modulation of selectivity for song components and the masking of responses to specific acoustic features without altering spectrotemporal tuning. The results describe a novel form of inhibitory modulation of the encoding of learned categories and demonstrate that inhibition plays a central role in shaping the responses of neurons to learned, natural signals.


eLife ◽  
2013 ◽  
Vol 2 ◽  
Author(s):  
Andrew R Dykstra ◽  
Alexander Gutschalk

Using computational models and stimuli that resemble natural acoustic signals, auditory scientists explore how we segregate competing streams of sound.

