Converging intracortical signatures of two separated processing timescales in human early auditory cortex

2019 ◽  
Author(s):  
Fabiano Baroni ◽  
Benjamin Morillon ◽  
Agnès Trébuchon ◽  
Catherine Liégeois-Chauvel ◽  
Itsaso Olasagasti ◽  
...  

Abstract
Neural oscillations in auditory cortex are argued to support parsing and representing speech constituents at their corresponding temporal scales. Yet, how incoming sensory information interacts with ongoing spontaneous brain activity, what features of the neuronal microcircuitry underlie spontaneous and stimulus-evoked spectral fingerprints, and what these fingerprints entail for stimulus encoding remain largely open questions. We used a combination of human invasive electrophysiology, computational modeling, and decoding techniques to assess the information encoding properties of brain activity and to relate them to a plausible underlying neuronal microarchitecture. We analyzed intracortical auditory EEG activity from 10 patients while they listened to short sentences. Pre-stimulus neural activity in early auditory cortical regions often exhibited power spectra with a shoulder in the delta range and a small bump in the beta range. Speech decreased power in the beta range, and increased power in the delta-theta and gamma ranges. Using multivariate machine learning techniques, we assessed the spectral profile of information content for two aspects of speech processing: detection and discrimination. We obtained better phase than power information decoding, and a bimodal spectral profile of information content with better decoding at low (delta-theta) and high (gamma) frequencies than at intermediate (beta) frequencies. These experimental data were reproduced by a simple rate model made of two subnetworks with different timescales, each composed of coupled excitatory and inhibitory units, and connected via a negative feedback loop. Modeling and experimental results were similar in terms of pre-stimulus spectral profile (except for the iEEG beta bump), spectral modulations with speech, and spectral profile of information content.
Altogether, we provide converging evidence from both univariate spectral analysis and decoding approaches for a dual timescale processing infrastructure in human auditory cortex, and show that it is consistent with the dynamics of a simple rate model.

Author summary
Like most animal vocalizations, speech results from a pseudo-rhythmic process that reflects the convergence of motor and auditory neural substrates and the natural resonance properties of the vocal apparatus towards efficient communication. Here, we leverage the excellent temporal and spatial resolution of intracranial EEG to demonstrate that neural activity in human early auditory cortical areas during speech perception exhibits a dual-scale spectral profile of power changes, with speech increasing power in low (delta-theta) and high (gamma to high-gamma) frequency ranges, while decreasing power in intermediate (alpha-beta) frequencies. Single-trial multivariate decoding also resulted in a bimodal spectral profile of information content, with better decoding at low and high frequencies than at intermediate ones. From both spectral and informational perspectives, these patterns are consistent with the activity of a relatively simple computational model comprising two reciprocally connected excitatory/inhibitory sub-networks operating at different (low and high) timescales. By combining experimental, decoding, and modeling approaches, we provide consistent evidence for the existence, information coding value, and underlying neuronal architecture of dual timescale processing in human auditory cortex.
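The dual-timescale architecture described in the abstract can be illustrated with a minimal Wilson-Cowan-style rate model: two excitatory/inhibitory subnetworks with slow (~100 ms) and fast (~10 ms) time constants, coupled through mutual negative feedback between their excitatory units. The sketch below is illustrative only; the connectivity weights, time constants, and noise level are assumptions for demonstration, not the parameters fitted in the paper.

```python
import numpy as np

def simulate(T=2.0, dt=1e-3, drive=0.5, seed=0):
    """Euler-integrate two coupled E/I rate subnetworks with different
    timescales, linked by a negative feedback loop between their
    excitatory units. Returns rates with shape (steps, 4) ordered
    E1, I1, E2, I2 (subnetwork 1 slow, subnetwork 2 fast)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    # time constants (s): slow pair ~100 ms, fast pair ~10 ms
    tau = np.array([0.10, 0.12, 0.01, 0.012])
    # connectivity: rows are targets E1, I1, E2, I2; columns are sources.
    # within-subnetwork E->E, E->I excitation and I->E, I->I inhibition,
    # plus cross-subnetwork negative feedback between E1 and E2.
    W = np.array([
        [ 2.0, -2.5, -1.0,  0.0],
        [ 2.5, -1.0,  0.0,  0.0],
        [-1.0,  0.0,  2.0, -2.5],
        [ 0.0,  0.0,  2.5, -1.0],
    ])
    ext = np.array([drive, 0.0, drive, 0.0])  # external drive to E units
    r = np.zeros(4)
    rates = np.zeros((n, 4))
    for t in range(n):
        inp = W @ r + ext + rng.normal(0.0, 0.2, size=4)
        # rectified saturating gain keeps rates in [0, 1)
        r = r + (dt / tau) * (-r + np.tanh(np.clip(inp, 0.0, None)))
        rates[t] = r
    return rates
```

With stimulus-like increases in `drive`, power in such a model shifts between the slow and fast subnetworks' characteristic frequency bands, which is the qualitative behavior the study's model exploits.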

PLoS ONE ◽  
2015 ◽  
Vol 10 (9) ◽  
pp. e0137915 ◽  
Author(s):  
Rick L. Jenison ◽  
Richard A. Reale ◽  
Amanda L. Armstrong ◽  
Hiroyuki Oya ◽  
Hiroto Kawasaki ◽  
...  

2005 ◽  
Vol 93 (1) ◽  
pp. 210-222 ◽  
Author(s):  
Michael P. Harms ◽  
John J. Guinan ◽  
Irina S. Sigalovsky ◽  
Jennifer R. Melcher

Functional magnetic resonance imaging (fMRI) of human auditory cortex has demonstrated a striking range of temporal waveshapes in responses to sound. Prolonged (30 s) low-rate (2/s) noise burst trains elicit “sustained” responses, whereas high-rate (35/s) trains elicit “phasic” responses with peaks just after train onset and offset. As a step toward understanding the significance of these responses for auditory processing, the present fMRI study sought to resolve exactly which features of sound determine cortical response waveshape. The results indicate that sound temporal envelope characteristics, but not sound level or bandwidth, strongly influence response waveshapes, and thus the underlying time patterns of neural activity. The results show that sensitivity to sound temporal envelope holds in both primary and nonprimary cortical areas, but nonprimary areas show more pronounced phasic responses for some types of stimuli (higher-rate trains, continuous noise), indicating more prominent neural activity at sound onset and offset. It has been hypothesized that the neural activity underlying the onset and offset peaks reflects the beginning and end of auditory perceptual events. The present data support this idea because sound temporal envelope, the sound characteristic that most strongly influences whether fMRI responses are phasic, also strongly influences whether successive stimuli (e.g., the bursts of a train) are perceptually grouped into a single auditory event. Thus fMRI waveshape may provide a window onto neural activity patterns that reflect the segmentation of our auditory environment into distinct, meaningful events.


1998 ◽  
Vol 35 (3) ◽  
pp. 283-292 ◽  
Author(s):  
Marty G. Woldorff ◽  
Steven A. Hillyard ◽  
Chris C. Gallen ◽  
Scott R. Hampson ◽  
Floyd E. Bloom

2013 ◽  
Vol 110 (9) ◽  
pp. 2163-2174 ◽  
Author(s):  
Juan M. Abolafia ◽  
M. Martinez-Garcia ◽  
G. Deco ◽  
M. V. Sanchez-Vives

Processing of temporal information is key in auditory processing. In this study, we recorded single-unit activity from the auditory cortex of rats while they performed an interval-discrimination task. The animals had to decide whether two auditory stimuli were separated by either 150 or 300 ms and nose-poke to the left or to the right accordingly. The spike firing of single neurons in the auditory cortex was then compared in engaged vs. idle brain states. We found that spike firing variability, measured with the Fano factor, was markedly reduced in engaged trials, not only during stimulation but also in between stimuli. We next explored whether this decrease in variability was associated with increased information encoding. Our information theory analysis revealed increased information content in auditory responses during engagement compared with idle states, in particular in the responses to task-relevant stimuli. Altogether, we demonstrate that task engagement significantly modulates the coding properties of auditory cortical neurons during an interval-discrimination task.
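The variability measure used in the study above, the Fano factor, is simply the across-trial variance of spike counts in a fixed window divided by their mean; it equals 1 for a Poisson process, and values below 1 indicate reduced trial-to-trial variability of the kind reported during task engagement. A minimal sketch (the windowing and trial structure here are assumptions for illustration):

```python
import numpy as np

def fano_factor(spike_counts):
    """Fano factor of spike counts across trials: var / mean.
    Counts should come from the same time window on every trial.
    Returns NaN if the mean count is zero."""
    counts = np.asarray(spike_counts, dtype=float)
    m = counts.mean()
    return counts.var(ddof=1) / m if m > 0 else np.nan

# Example: Poisson-distributed counts give a Fano factor near 1,
# while perfectly regular counts give 0.
rng = np.random.default_rng(0)
poisson_counts = rng.poisson(lam=20, size=10000)
ff_poisson = fano_factor(poisson_counts)   # close to 1
ff_regular = fano_factor([20, 20, 20, 20]) # exactly 0
```

In an analysis like the study's, one would compute this per neuron over matched windows in engaged vs. idle trials and compare the two distributions.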


eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Liping Yu ◽  
Jiawei Hu ◽  
Chenlin Shi ◽  
Li Zhou ◽  
Maozhi Tian ◽  
...  

Working memory (WM), the ability to actively hold information in memory over a delay period of seconds, is a fundamental constituent of cognition. Delay-period activity in sensory cortices has been observed in WM tasks, but whether and when this activity plays a functional role in memory maintenance remains unclear. Here we investigated the causal role of auditory cortex (AC) in memory maintenance in mice performing an auditory WM task. Electrophysiological recordings revealed that AC neurons were active not only during the presentation of the auditory stimulus but also early in the delay period. Furthermore, optogenetic suppression of neural activity in AC during the stimulus epoch and early delay period impaired WM performance, whereas suppression later in the delay period did not. Thus, AC is essential for information encoding and maintenance in the auditory WM task, especially during the early delay period.

