Task-related preparatory modulations multiply with acoustic processing in monkey auditory cortex

2014, Vol. 39 (9), pp. 1538–1550
Author(s): Roohollah Massoudi, Marc M. Van Wanrooij, Sigrid M. C. I. Van Wetter, Huib Versnel, A. John Van Opstal

2009, Vol. 102 (5), pp. 2638–2656
Author(s): Hiroki Asari, Anthony M. Zador

Acoustic processing requires integration over time. We have used in vivo intracellular recording to measure neuronal integration times in anesthetized rats. Using natural sounds and other stimuli, we found that synaptic inputs to auditory cortical neurons showed a rather long context dependence, up to ≥4 s (τ ∼ 1 s), even though sound-evoked excitatory and inhibitory conductances per se rarely lasted ≳100 ms. Thalamic neurons showed only a much faster form of adaptation with a decay constant τ <100 ms, indicating that the long-lasting form originated from presynaptic mechanisms in the cortex, such as synaptic depression. Restricting knowledge of the stimulus history to only a few hundred milliseconds reduced the predictable response component to about half that of the optimal infinite-history model. Our results demonstrate the importance of long-range temporal effects in auditory cortex and suggest a potential neural substrate for auditory processing that requires integration over timescales of seconds or longer, such as stream segregation.
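The two timescales described above — fast adaptation in thalamus (τ < 100 ms) and a much slower, presumably cortical, depression-like component (τ ∼ 1 s) — can be illustrated with a minimal resource-depletion model in the spirit of Tsodyks–Markram synaptic dynamics. This is a sketch, not the authors' model; the function `depressing_synapse` and its parameters are hypothetical.

```python
import numpy as np

def depressing_synapse(spikes, dt, tau_rec, use=0.5):
    """Minimal depressing-synapse sketch: each presynaptic event consumes a
    fraction `use` of the available resource r, which then recovers toward 1
    with time constant tau_rec (forward-Euler integration)."""
    r = 1.0
    efficacy = np.zeros(len(spikes))
    for i, s in enumerate(spikes):
        if s:
            efficacy[i] = use * r   # response scales with remaining resource
            r -= use * r            # depletion on each event
        r += (1.0 - r) * dt / tau_rec  # recovery between events
    return efficacy

# 20-Hz input train for 2 s, 1-ms time steps
dt = 0.001
spikes = np.zeros(2000, dtype=bool)
spikes[::50] = True
fast = depressing_synapse(spikes, dt, tau_rec=0.1)  # thalamus-like recovery
slow = depressing_synapse(spikes, dt, tau_rec=1.0)  # cortex-like recovery
```

With slow recovery, the synapse remains depressed across seconds of stimulation, so the response to each event depends on seconds of stimulus history — the kind of long context dependence the recordings revealed.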


2000, Vol. 12 (3), pp. 449–460
Author(s): G. Dehaene-Lambertz

Early cerebral specialization and lateralization for auditory processing in 4-month-old infants were studied by recording high-density evoked potentials to acoustic and phonetic changes in a series of repeated stimuli (either tones or syllables). Mismatch responses to these stimuli exhibit a distinct topography, suggesting that different neural networks within the temporal lobe are involved in the perception and representation of the different features of an auditory stimulus. These data confirm that specialized modules are present within the auditory cortex very early in development. However, for both syllables and continuous tones, higher voltages were recorded over the left hemisphere than over the right, with no significant hemisphere-by-stimulus-type interaction. This suggests that there is no greater left-hemisphere involvement in phonetic processing than in acoustic processing during the first months of life.


Author(s): Kevin D. Prinsloo, Edmund C. Lalor

Abstract
In recent years, research on natural speech processing has benefited from recognizing that low-frequency cortical activity tracks the amplitude envelope of natural speech. However, it remains unclear to what extent this tracking reflects speech-specific processing beyond the analysis of the stimulus acoustics. In the present study, we aimed to disentangle contributions to cortical envelope tracking that reflect general acoustic processing from those that are functionally related to processing speech. To do so, we recorded EEG from subjects as they listened to "auditory chimeras" – stimuli composed of the temporal fine structure (TFS) of one speech stimulus modulated by the amplitude envelope (ENV) of another speech stimulus. By varying the number of frequency bands used in making the chimeras, we obtained some control over which speech stimulus was recognized by the listener. No matter which stimulus was recognized, envelope tracking was always strongest for the ENV stimulus, indicating a dominant contribution from acoustic processing. However, there was also a positive relationship between intelligibility and tracking of the perceived speech, indicating a contribution from speech-specific processing. These findings were supported by a follow-up analysis that assessed envelope tracking as a function of the (estimated) output of the cochlea rather than the original stimuli used in creating the chimeras. Finally, we sought to isolate the speech-specific contribution to envelope tracking using forward encoding models and found that indices of phonetic feature processing tracked reliably with intelligibility. Together, these results show that cortical speech tracking is dominated by acoustic processing but also reflects speech-specific processing.

Funding: This work was supported by a Career Development Award from Science Foundation Ireland (CDA/15/3316) and a grant from the National Institute on Deafness and Other Communication Disorders (DC016297).

Acknowledgments: The authors thank Dr. Aaron Nidiffer, Dr. Aisling O'Sullivan, Thomas Stoll and Lauren Szymula for assistance with data collection, and Dr. Nathaniel Zuk, Dr. Aaron Nidiffer and Dr. Aisling O'Sullivan for helpful comments on this manuscript.

Significance Statement
Activity in auditory cortex is known to dynamically track the energy fluctuations, or amplitude envelope, of speech. Measures of this tracking are now widely used in research on hearing and language and have had a substantial influence on theories of how auditory cortex parses and processes speech. But how much of this speech tracking is actually driven by speech-specific processing, rather than general acoustic processing, is unclear, limiting its interpretability and its usefulness. Here, by merging two speech stimuli to form so-called auditory chimeras, we show that EEG tracking of the speech envelope is dominated by acoustic processing but also reflects linguistic analysis. This has important implications for theories of cortical speech tracking and for using measures of that tracking in applied research.
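The chimera construction at the heart of the study — the TFS of one stimulus carrying the ENV of another, band by band — can be sketched with an FFT-based analytic signal (a pure-NumPy stand-in for `scipy.signal.hilbert`). The band count, band edges, and ideal FFT-mask filters below are illustrative choices, not the authors' exact stimulus pipeline.

```python
import numpy as np

def analytic(x):
    """FFT-based analytic signal (equivalent to scipy.signal.hilbert)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    return np.fft.ifft(X * h)

def bandpass(x, fs, lo, hi):
    """Ideal (FFT-mask) band-pass filter -- crude but sufficient here."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    X[(f < lo) | (f > hi)] = 0
    return np.fft.irfft(X, len(x))

def chimera(x, y, fs, n_bands=8, fmin=80.0, fmax=8000.0):
    """Combine the TFS of x with the ENV of y in log-spaced bands
    (after Smith, Delgutte & Oxenham, 2002)."""
    edges = np.geomspace(fmin, fmax, n_bands + 1)
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        xa = analytic(bandpass(x, fs, lo, hi))
        ya = analytic(bandpass(y, fs, lo, hi))
        out += np.abs(ya) * np.cos(np.angle(xa))  # ENV(y) carried by TFS(x)
    return out

fs = 16000
rng = np.random.default_rng(0)
x = rng.standard_normal(4000)  # stand-in for speech stimulus 1
y = rng.standard_normal(4000)  # stand-in for speech stimulus 2
z = chimera(x, y, fs)
```

With few bands the within-band envelope is coarse and the TFS stimulus tends to be recognized; with many bands the summed narrow-band envelopes reconstruct the ENV stimulus — this is the lever the study uses to control which stimulus the listener perceives.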


2018, Vol. 38 (36), pp. 7822–7832
Author(s): Michelle Moerel, Federico De Martino, Kâmil Uğurbil, Elia Formisano, Essa Yacoub

2005, Vol. 84 (1)
Author(s): P. Benesová, M. Langmeier, J. Betka, S. Trojan

2020, Vol. 140 (7), pp. 762–768
Author(s): Yoshiki Aizawa, Nina Pilyugina, Akihiko Tsukahara, Keita Tanaka
