Temporal Dynamics of Adaptation to Natural Sounds in the Human Auditory Cortex

2007, Vol 18 (6), pp. 1350-1360
Author(s): C. F. Altmann, H. Nakata, Y. Noguchi, K. Inui, M. Hoshiyama, ...

2020
Author(s): Jean-Pierre R. Falet, Jonathan Côté, Veronica Tarka, Zaida-Escila Martinez-Moreno, Patrice Voss, ...

Abstract: We present a novel method to map the functional organization of the human auditory cortex noninvasively using magnetoencephalography (MEG). More specifically, this method estimates, via reverse correlation, the spectrotemporal receptive fields (STRFs) in response to a dense pure-tone stimulus, from which important spectrotemporal characteristics of neuronal processing can be extracted and mapped back onto the cortical surface. We show that several neuronal populations can be identified by examining the spectrotemporal characteristics of their STRFs, and demonstrate how these can be used to generate tonotopic gradient maps. In doing so, we show that the spatial resolution of MEG is sufficient to reliably extract important information about the spatial organization of the auditory cortex, while its excellent temporal resolution enables the analysis of complex temporal dynamics of auditory processing, such as best temporal modulation rate and response latency. Furthermore, because spectrotemporally dense auditory stimuli can be used with MEG, the time required to acquire the data needed to generate tonotopic maps is significantly shorter for MEG than for other neuroimaging tools that acquire BOLD-like signals.
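The core step described in this abstract, reverse correlation of a source-level response against a dense pure-tone stimulus followed by extraction of a best frequency, can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration rather than the authors' implementation; `stim_spec`, `response`, `freqs`, and the lag count are hypothetical, pre-processed inputs.

```python
import numpy as np

def estimate_strf(stim_spec, response, n_lags=40):
    """Estimate a spectrotemporal receptive field by reverse correlation.

    stim_spec : (n_times, n_freqs) spectrogram of the dense pure-tone stimulus,
                binned at the same rate as the response (hypothetical input).
    response  : (n_times,) source-level MEG time course for one cortical source.
    n_lags    : number of time lags (samples) spanned by the STRF.
    """
    n_times, n_freqs = stim_spec.shape
    strf = np.zeros((n_lags, n_freqs))
    # Reverse correlation: weight the stimulus preceding each time point
    # by the response at that time point and average over the recording.
    for lag in range(n_lags):
        strf[lag] = response[lag:] @ stim_spec[:n_times - lag] / (n_times - lag)
    return strf

def best_frequency(strf, freqs):
    """Best frequency = channel with the largest peak (absolute) STRF weight."""
    return freqs[np.argmax(np.max(np.abs(strf), axis=0))]
```

Applying `best_frequency` to the STRF of every cortical source and projecting the result onto the surface would give a tonotopic gradient map in the spirit of the abstract; response latency and best temporal modulation rate could similarly be read off each STRF's temporal profile.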


2016
Author(s): Liberty S. Hamilton, Erik Edwards, Edward F. Chang

Abstract: To derive meaning from speech, we must extract multiple dimensions of concurrent information from incoming speech signals, including phonetic and prosodic cues. Equally important is the detection of acoustic cues that give structure and context to the information we hear, such as sentence boundaries. How the brain organizes this information processing is unknown. Here, using data-driven computational methods on an extensive set of high-density intracranial recordings, we reveal a large-scale partitioning of the entire human speech cortex into two spatially distinct regions that detect important cues for parsing natural speech. These caudal (Zone 1) and rostral (Zone 2) regions work in parallel to detect onsets and prosodic information, respectively, within naturally spoken sentences. In contrast, local processing within each region supports phonetic feature encoding. These findings demonstrate a fundamental organizational property of the human auditory cortex that has been previously unrecognized.
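The abstract refers to data-driven computational methods without specifying them. One generic way to obtain such a partitioning of electrodes into a small number of response zones is an unsupervised factorization of their trial-averaged response profiles; the sketch below uses scikit-learn's non-negative matrix factorization purely as an illustrative stand-in, and `responses`, the component count, and the labelling rule are assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.decomposition import NMF

def partition_electrodes(responses, n_components=2, seed=0):
    """Factor trial-averaged electrode responses into a few temporal profiles
    and assign each electrode to the profile that weights it most strongly.

    responses : (n_electrodes, n_timepoints) non-negative response matrix,
                e.g. baseline-corrected, rectified high-gamma power (assumed).
    """
    model = NMF(n_components=n_components, init="nndsvda",
                max_iter=500, random_state=seed)
    weights = model.fit_transform(responses)   # (n_electrodes, n_components)
    profiles = model.components_               # (n_components, n_timepoints)
    labels = np.argmax(weights, axis=1)        # zone label per electrode
    return labels, profiles
```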


2016, Vol 45, pp. 10-22
Author(s): Björn Herrmann, Molly J. Henry, Ingrid S. Johnsrude, Jonas Obleser

2013, Vol 33 (29), pp. 11888-11898
Author(s): M. Moerel, F. De Martino, R. Santoro, K. Ugurbil, R. Goebel, ...

2019
Author(s): Xiangbin Teng, David Poeppel

Abstract: Natural sounds have broadband modulation spectra and contain acoustic dynamics ranging from tens to hundreds of milliseconds. How does the human auditory system encode acoustic information over such wide-ranging timescales to achieve sound recognition? Previous work (Teng et al., 2017) demonstrated a temporal coding preference in the auditory system for the theta (4–7 Hz) and gamma (30–45 Hz) ranges, but it remains unclear how acoustic dynamics between these two ranges are encoded. Here we generated artificial sounds with temporal structures over timescales from ~200 ms to ~30 ms and investigated temporal coding on different timescales in the human auditory cortex. Participants discriminated sounds with temporal structures at different timescales while undergoing magnetoencephalography (MEG) recording. The data show robust neural entrainment in the theta and gamma bands, but not in the alpha and beta bands. Classification analyses as well as stimulus reconstruction reveal that acoustic information at all timescales can be differentiated through the theta and gamma bands, but that acoustic dynamics in the theta and gamma ranges are preferentially encoded. We replicate earlier findings of multi-timescale processing and further demonstrate that the theta and gamma bands show generality of temporal coding across all timescales with comparable capacity. The results support the hypothesis that the human auditory cortex primarily encodes auditory information using neural processes within two discrete temporal regimes.
Significance: Natural sounds contain rich acoustic dynamics over wide-ranging timescales, but perceptually relevant regularities often occupy specific temporal ranges. For instance, speech carries phonemic information on a shorter timescale than syllabic information at ~200 ms. How does the brain efficiently 'sample' continuous acoustic input to perceive temporally structured sounds? We presented sounds with temporal structures at different timescales and measured cortical entrainment using magnetoencephalography. We found, unexpectedly, that the human auditory system preserves high temporal coding precision on two non-overlapping timescales, the slower (theta) and faster (gamma) bands, to track acoustic dynamics over all timescales. The results suggest that the acoustic environment, which we experience as seamless and continuous, is segregated by discontinuous neural processing, or 'sampled.'
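The entrainment measure this abstract relies on can be sketched as band-pass filtering followed by inter-trial phase coherence, a standard way to quantify theta- and gamma-band tracking. The code below is a generic illustration under stated assumptions, not the study's analysis pipeline; `meg_trials`, the sampling rate, the filter order, and the band edges are assumed for the example.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_phase(meg_trials, fs, band):
    """Band-pass filter single-trial data and return the analytic phase.

    meg_trials : (n_trials, n_times) sensor or source time courses (assumed).
    fs         : sampling rate in Hz.
    band       : (low, high) edge frequencies in Hz, e.g. (4, 7) or (30, 45).
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, meg_trials, axis=-1)
    return np.angle(hilbert(filtered, axis=-1))

def itpc(phases):
    """Inter-trial phase coherence: length of the mean phase vector over trials."""
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

# Example: compare entrainment strength between the theta and gamma bands.
# theta_itpc = itpc(band_phase(meg_trials, fs=1000, band=(4, 7)))
# gamma_itpc = itpc(band_phase(meg_trials, fs=1000, band=(30, 45)))
```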


2014, Vol 10 (1), e1003412
Author(s): Roberta Santoro, Michelle Moerel, Federico De Martino, Rainer Goebel, Kamil Ugurbil, ...

NeuroImage, 2007, Vol 35 (3), pp. 1192-1200
Author(s): Christian F. Altmann, Christoph Bledowski, Michael Wibral, Jochen Kaiser
