Neural signals to violations of abstract rules using speech-like stimuli

2019 ◽  
Author(s):  
Yamil Vidal ◽  
Perrine Brusini ◽  
Michela Bonfieni ◽  
Jacques Mehler ◽  
Tristan Bekinschtein

Abstract
As the evidence that predictive processes play a role in a wide variety of cognitive domains increases, the brain as a predictive machine has become a central idea in neuroscience. In auditory processing, considerable progress has been made using variations of the Oddball design, but most of the existing work seems restricted to predictions based on physical features or on conditional rules linking successive stimuli. To characterise the brain's capacity to form predictions based on abstract rules, we present here two experiments that use speech-like stimuli to overcome these limitations and avoid common confounds. Pseudowords were presented in isolation, intermixed with infrequent deviants that contained unexpected phoneme sequences. As hypothesized, the occurrence of unexpected sequences of phonemes reliably elicited an early prediction error signal. These prediction error signals did not seem to be modulated by attentional manipulations induced by different task instructions, suggesting that the predictions are deployed even when the task at hand does not volitionally involve error detection. In contrast, the number of syllables congruent with a standard pseudoword presented before the point of deviance exerted a strong modulation. The prediction error's amplitude doubled when two congruent syllables were presented instead of one, even though local transitional probabilities were kept constant. This suggests that auditory predictions can be built by integrating information beyond the immediate past. In sum, the results presented here further contribute to the understanding of the predictive capabilities of the human auditory system when facing complex stimuli and abstract rules.
Significance Statement
The generation of predictions seems to be a prevalent brain computation, and in the case of auditory processing this information is intrinsically temporal.
The study of auditory predictions has been largely restricted to unexpected physical stimulus features or to rules connecting consecutive stimuli. In contrast, our everyday experience suggests that the human auditory system is capable of more sophisticated predictions. This becomes evident in the case of speech processing, where abstract rules with long-range dependencies are universal. In this article, we present two electroencephalography experiments that use speech-like stimuli to explore the predictive capabilities of the human auditory system. The results presented here advance the understanding of our auditory system's ability to implement predictions using information beyond the immediate past.

Author(s):  
Abdollah Moossavi ◽  
Nasrin Gohari

Background and Aim: Researchers in the fields of psychoacoustics and electrophysiology have mostly focused on demonstrating the better and different neurophysiological performance of musicians. The present study explores the impact of music upon the auditory system and the non-auditory system, as well as the improvement of language and cognitive skills following listening to music or receiving music training. Recent Findings: Studies indicate the impact of music upon auditory processing from the cochlea to the secondary auditory cortex and other parts of the brain. In addition, the impact of music on speech perception and other cognitive processing is demonstrated. Some papers point to bottom-up and others to top-down processing, which is explained in detail. Conclusion: Listening to music and receiving music training, in the long run, create plasticity from the cochlea to the auditory cortex. Since the auditory pathway for musical sounds overlaps functionally with that for speech, music also aids speech perception. Both perceptual and cognitive functions are involved in this process. Music engages a large area of the brain, so it can be used as a supplement in rehabilitation programs and helps improve speech and language skills.


2019 ◽  
Author(s):  
Jérémy Giroud ◽  
Agnès Trébuchon ◽  
Daniele Schön ◽  
Patrick Marquis ◽  
Catherine Liegeois-Chauvel ◽  
...  

Abstract
Speech perception is mediated by both left and right auditory cortices, but with differential sensitivity to specific acoustic information contained in the speech signal. A detailed description of this functional asymmetry is missing, and the underlying models are widely debated. We analyzed cortical responses from 96 epilepsy patients with electrode implantation in left or right primary, secondary, and/or association auditory cortex. We presented short acoustic transients to reveal the stereotyped spectro-spatial oscillatory response profile of the auditory cortical hierarchy. We show remarkably similar bimodal spectral response profiles in left and right primary and secondary regions, with preferred processing modes in the theta (∼4-8 Hz) and low gamma (∼25-50 Hz) ranges. These results highlight that the human auditory system employs a two-timescale processing mode. Beyond these first cortical levels of auditory processing, a hemispheric asymmetry emerged, with delta and beta band (∼3/15 Hz) responsivity prevailing in the right hemisphere and theta and gamma band (∼6/40 Hz) activity in the left. These intracranial data provide a more fine-grained and nuanced characterization of cortical auditory processing in the two hemispheres, shedding light on the neural dynamics that potentially shape auditory and speech processing at different levels of the cortical hierarchy.
Author Summary
Speech processing is now known to be distributed across the two hemispheres, but the origin and function of lateralization continue to be vigorously debated. The asymmetric sampling in time (AST) hypothesis predicts that (1) the auditory system employs a two-timescale processing mode, (2) this mode is present in both hemispheres but with a different ratio of fast and slow timescales, and (3) the asymmetry emerges outside of primary cortical regions.
Capitalizing on intracranial data from 96 epilepsy patients, we validated each of these predictions and provide a precise estimate of the processing timescales. In particular, we reveal that asymmetric sampling in associative areas is subtended by distinct two-timescale processing modes. Overall, our results shed light on the neurofunctional architecture of cortical auditory processing.


2013 ◽  
Vol PP (99) ◽  
pp. 1-18 ◽  

In recent years, a number of feature extraction procedures for automatic speech recognition (ASR) systems have been based on models of human auditory processing, and one often hears arguments in favor of implementing knowledge of human auditory perception and cognition into machines for ASR. This paper takes a reverse route, and argues that the engineering techniques for automatic recognition of speech that are already in widespread use are often consistent with some well-known properties of the human auditory system.


2021 ◽  
Author(s):  
Luis M. Rivera-Perez ◽  
Julia T. Kwapiszewski ◽  
Michael T. Roberts

Abstract
The inferior colliculus (IC), the midbrain hub of the central auditory system, receives extensive cholinergic input from the pontomesencephalic tegmentum. Activation of nicotinic acetylcholine receptors (nAChRs) in the IC can alter acoustic processing and enhance auditory task performance. However, how nAChRs affect the excitability of specific classes of IC neurons remains unknown. Recently, we identified vasoactive intestinal peptide (VIP) neurons as a distinct class of glutamatergic principal neurons in the IC. Here, in experiments using male and female mice, we show that cholinergic terminals are routinely located adjacent to the somas and dendrites of VIP neurons. Using whole-cell electrophysiology in brain slices, we found that acetylcholine drives surprisingly strong and long-lasting excitation and inward currents in VIP neurons. This excitation was unaffected by the muscarinic receptor antagonist atropine. Application of nAChR antagonists revealed that acetylcholine excites VIP neurons mainly via activation of α3β4* nAChRs, a nAChR subtype that is rare in the brain. Furthermore, we show that cholinergic excitation is intrinsic to VIP neurons and does not require activation of presynaptic inputs. Lastly, we found that low frequency trains of acetylcholine puffs elicited temporal summation in VIP neurons, suggesting that in vivo-like patterns of cholinergic input can reshape activity for prolonged periods.
These results reveal the first cellular mechanisms of nAChR regulation in the IC, identify a functional role for α3β4* nAChRs in the auditory system, and suggest that cholinergic input can potently influence auditory processing by increasing excitability in VIP neurons and their postsynaptic targets.
Key Points Summary
The inferior colliculus (IC), the midbrain hub of the central auditory system, receives extensive cholinergic input and expresses a variety of nicotinic acetylcholine receptor (nAChR) subunits. In vivo activation of nAChRs alters the input-output functions of IC neurons and influences performance in auditory tasks. However, how nAChR activation affects the excitability of specific IC neuron classes remains unknown. Here we show in mice that cholinergic terminals are located adjacent to the somas and dendrites of VIP neurons, a class of IC principal neurons. We find that acetylcholine elicits surprisingly strong, long-lasting excitation of VIP neurons, and that this is mediated mainly through activation of α3β4* nAChRs, a subtype that is rare in the brain. Our data identify a role for α3β4* nAChRs in the central auditory pathway and reveal a mechanism by which cholinergic input can influence auditory processing in the IC and the postsynaptic targets of VIP neurons.


2019 ◽  
Author(s):  
Gloria G Parras ◽  
Catalina Valdés-Baizabal ◽  
Lauren Harms ◽  
Patricia Michie ◽  
Manuel S Malmierca

Abstract
Efficient sensory processing requires that the brain is able to maximize its response to unexpected stimuli while suppressing responsivity to expected events. Mismatch negativity (MMN) is an auditory event-related potential that occurs when a regular pattern is interrupted by an event that violates the expected properties of the pattern. MMN has been found to be reduced in individuals with schizophrenia in over 100 separate studies, an effect believed to be underpinned by glutamate N-methyl-D-aspartate receptor (NMDA-R) dysfunction, as NMDA-R antagonists also reduce MMN in healthy volunteers. The aim of the current study was to examine this effect in rodents. Using single unit recording in specific auditory areas, with methods not readily utilized in humans, we have previously demonstrated that neuronal indices of rodent mismatch responses recorded from thalamic and cortical areas of the brain can be decomposed into a relatively simple repetition suppression process and a more sophisticated prediction error process. In the current study, we aimed to test how the NMDA-R antagonist MK-801 affected both of these processes along the rat auditory thalamocortical pathway. We found that MK-801 had the opposite effect to that expected, enhancing thalamic repetition suppression and cortical prediction error. These single unit data correlate with recordings of local field responses. Together with previous data, this study suggests that our understanding of the contribution of the NMDA-R system to MMN generation is far from complete, and it also has potential implications for future research in schizophrenia.
Significance Statement
In this study, we demonstrate that an NMDA-R antagonist, MK-801, differentially affects single neuron responses to auditory stimuli along the thalamocortical axis, increasing the response magnitude to unexpected events in the auditory cortex and intensifying the adaptation of responses to expected events in the thalamus.
Thus, we provide evidence that NMDA-R antagonists alter the balance between the prediction error and repetition suppression processes that underlie the generation of mismatch responses in the brain, and that these effects are differentially expressed at different levels of auditory processing. As the effects of MK-801 were in the opposite direction to our expectations, this demonstrates that our understanding of the role of NMDA-R in synaptic plasticity, and of the neural processes underpinning MMN generation, is far from complete.


2007 ◽  
Vol 363 (1493) ◽  
pp. 1023-1035 ◽  
Author(s):  
Roy D Patterson ◽  
Ingrid S Johnsrude

In this paper, we describe domain-general auditory processes that we believe are prerequisite to the linguistic analysis of speech. We discuss biological evidence for these processes and how they might relate to processes that are specific to human speech and language. We begin with a brief review of (i) the anatomy of the auditory system and (ii) the essential properties of speech sounds. Section 4 describes the general auditory mechanisms that we believe are applied to all communication sounds, and how functional neuroimaging is being used to map the brain networks associated with domain-general auditory processing. Section 5 discusses recent neuroimaging studies that explore where such general processes give way to those that are specific to human speech and language.


2018 ◽  
Author(s):  
Gianpaolo Demarchi ◽  
Gaëtan Sanchez ◽  
Nathan Weisz

Abstract
Prior experience shapes sensory perception by enabling the formation of expectations with regard to the occurrence of upcoming sensory events. Especially in the visual modality, an increasing number of studies show that prediction-related neural signals carry feature-specific information about the stimulus. This is less established in the auditory modality, in particular in the absence of bottom-up signals driving neural activity. We studied whether auditory predictions are tuned sharply enough to carry tonotopically specific information. For this purpose, we conducted a magnetoencephalography (MEG) experiment in which participants passively listened to sound sequences that varied in their regularity (i.e. entropy). Sound presentations were temporally predictable (3 Hz rate) but were occasionally omitted. Training classifiers on the random (high entropy) sound sequence and applying them to all conditions in a time-generalized manner allowed us to assess whether and how carrier-frequency-specific information in the MEG signal is modulated according to the entropy level. We show that, especially in an ordered (most predictable) sensory context, neural activity during the anticipatory and omission periods contains carrier-frequency-specific information. Overall, our results illustrate that prediction-related neural activity in the human auditory system can be tuned in a tonotopically specific manner.

