Neural correlates of learning pure tones versus natural sounds in the auditory cortex

2018 ◽  
Author(s):  
Ido Maor ◽  
Ravid Shwartz-Ziv ◽  
Libi Feigin ◽  
Yishai Elyada ◽  
Haim Sompolinsky ◽  
...  

ABSTRACT: Auditory perceptual learning of pure tones causes tonotopic map expansion in the primary auditory cortex (A1), but the function this plasticity subserves is unclear. We developed an automated training platform, the 'Educage', which was used to train mice on a go/no-go auditory discrimination task to their perceptual limits, for difficult discriminations among pure tones or natural sounds. Spiking responses of excitatory and inhibitory L2/3 neurons in mouse A1 revealed learning-induced overrepresentation of the learned frequencies, in accordance with previous literature. Using a novel computational model to study auditory tuning curves, we show that overrepresentation does not necessarily improve the network's discrimination performance for the learned tones. In contrast, perceptual learning of natural sounds induced 'sparsening' and decorrelation of the neural response, and consequently improved discrimination of these complex sounds. This signature of plasticity highlights the central role of A1 in coding natural sounds as compared with pure tones.
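The 'sparsening' and decorrelation described in this abstract can be quantified directly from trial-averaged firing rates. A minimal NumPy sketch follows; the choice of the Treves-Rolls sparseness index and mean pairwise correlation, and all function names, are illustrative assumptions, not the paper's exact analysis:

```python
import numpy as np

def treves_rolls_sparseness(rates):
    """Treves-Rolls sparseness of a response vector.
    Returns 1 for a maximally sparse response (a single nonzero element)
    and 0 for a fully uniform response."""
    r = np.asarray(rates, dtype=float)
    n = r.size
    a = (r.mean() ** 2) / np.mean(r ** 2)  # activity ratio, in (1/n, 1]
    return (1.0 - a) / (1.0 - 1.0 / n)

def mean_pairwise_correlation(responses):
    """Mean off-diagonal correlation between neurons' response profiles.
    `responses` has shape (n_neurons, n_stimuli); lower values indicate
    a more decorrelated population code."""
    c = np.corrcoef(responses)
    off_diagonal = c[~np.eye(c.shape[0], dtype=bool)]
    return off_diagonal.mean()
```

Under this reading, "sparsening" after learning would appear as an increase in the sparseness index, and decorrelation as a drop in the mean pairwise correlation across the recorded population.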

Author(s):  
Israel Nelken

Understanding the principles by which sensory systems represent natural stimuli is one of the holy grails of neuroscience. In the auditory system, the study of the coding of natural sounds has particular prominence. Indeed, the relationships between neural responses to simple stimuli (usually pure tone bursts), often used to characterize auditory neurons, and to complex sounds (in particular natural sounds) may be complex. Many different classes of natural sounds have been used to study the auditory system. Sound families that researchers have used to good effect in this endeavor include human speech, species-specific vocalizations, an "acoustic biotope" selected in one way or another, and sets of artificial sounds that mimic important features of natural sounds. Peripheral and brainstem representations of natural sounds are relatively well understood. The properties of the peripheral auditory system play a dominant role, and further processing occurs mostly within the frequency channels determined by these properties. At the level of the inferior colliculus, the highest brainstem station, representational complexity increases substantially due to the convergence of multiple processing streams. Undoubtedly, the most explored part of the auditory system, in terms of responses to natural sounds, is the primary auditory cortex. In spite of over 50 years of research, there is still no commonly accepted view of the nature of the population code for natural sounds in the auditory cortex. Neurons in the auditory cortex are believed by some to be primarily linear spectro-temporal filters, by others to respond to conjunctions of important sound features, or even to encode perceptual concepts such as "auditory objects." Whatever the exact mechanism, many studies consistently report a substantial increase in the variability of the response patterns of cortical neurons to natural sounds.
The generation of such variation may be the main contribution of auditory cortex to the coding of natural sounds.


2020 ◽  
Vol 13 ◽  
Author(s):  
Ido Maor ◽  
Ravid Shwartz-Ziv ◽  
Libi Feigin ◽  
Yishai Elyada ◽  
Haim Sompolinsky ◽  
...  

2008 ◽  
Vol 100 (3) ◽  
pp. 1622-1634 ◽  
Author(s):  
Ling Qin ◽  
JingYu Wang ◽  
Yu Sato

Previous studies in anesthetized animals reported that the primary auditory cortex (A1) showed homogeneous phasic responses to FM tones, namely a transient response to a particular instantaneous frequency when FM sweeps traversed a neuron's tone-evoked receptive field (TRF). Here, in awake cats, we report that A1 cells exhibit heterogeneous FM responses, consisting of three patterns. The first is continuous firing when a slow FM sweep traverses the receptive field of a cell with a sustained tonal response; the duration and amplitude of the FM response decrease with increasing sweep speed. The second pattern is transient firing corresponding to the cell's phasic tonal response. This response could be evoked only by a fast FM sweep through the cell's TRF, suggesting a preference for fast FM. The third pattern was associated with the off response to pure tones and was composed of several discrete response peaks during a slow FM stimulus. These peaks were not predictable from the cell's tonal response but reliably reflected the times when the FM swept across specific frequencies. Our A1 samples often exhibited a complex response pattern, combining two or three of the basic patterns above, resulting in a heterogeneous response population. The diversity of FM responses suggests that A1 uses multiple mechanisms to fully represent the whole range of FM parameters, including frequency extent, sweep speed, and direction.


2004 ◽  
Vol 7 (9) ◽  
pp. 974-981 ◽  
Author(s):  
Shaowen Bao ◽  
Edward F Chang ◽  
Jennifer Woods ◽  
Michael M Merzenich

2005 ◽  
Vol 94 (4) ◽  
pp. 2970-2975 ◽  
Author(s):  
Rajiv Narayan ◽  
Ayla Ergün ◽  
Kamal Sen

Although auditory cortex is thought to play an important role in processing complex natural sounds such as speech and animal vocalizations, the specific functional roles of cortical receptive fields (RFs) remain unclear. Here, we study the relationship between a behaviorally important function, the discrimination of natural sounds, and the structure of cortical RFs. We examine this problem in the model system of songbirds, using a computational approach. First, we constructed model neurons based on the spectral temporal RF (STRF), a widely used description of auditory cortical RFs. We focused on delayed inhibitory STRFs, a class of STRFs experimentally observed in primary auditory cortex (ACx) and its analog in songbirds (field L), which consist of an excitatory subregion and a delayed inhibitory subregion cotuned to a characteristic frequency. We quantified the discrimination of birdsongs by model neurons, examining both the dynamics and temporal resolution of discrimination, using a recently proposed spike distance metric (SDM). We found that single model neurons with delayed inhibitory STRFs can discriminate accurately between songs. Discrimination improves dramatically when the temporal structure of the neural response at fine timescales is considered. When we compared discrimination by model neurons with and without the inhibitory subregion, we found that the presence of the inhibitory subregion can improve discrimination. Finally, we modeled a cortical microcircuit with delayed synaptic inhibition, a candidate mechanism underlying delayed inhibitory STRFs, and showed that blocking inhibition in this model circuit degrades discrimination.
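A delayed-inhibitory STRF model neuron and a smoothed-response distance along these lines can be sketched in NumPy. Everything below (the Gaussian subregions, the 0.8 inhibitory weight, the exponential smoothing kernel standing in for the spike distance metric) is an illustrative assumption, not the paper's fitted model:

```python
import numpy as np

def delayed_inhibitory_strf(n_freq=32, n_time=20, cf=16, delay=5,
                            f_sigma=2.0, t_sigma=1.5):
    """STRF with an excitatory subregion at the characteristic frequency (cf)
    and a cotuned inhibitory subregion delayed by `delay` time bins."""
    f = np.arange(n_freq)[:, None]
    t = np.arange(n_time)[None, :]
    exc = np.exp(-((f - cf) ** 2) / (2 * f_sigma ** 2)
                 - (t - 2) ** 2 / (2 * t_sigma ** 2))
    inh = np.exp(-((f - cf) ** 2) / (2 * f_sigma ** 2)
                 - (t - 2 - delay) ** 2 / (2 * t_sigma ** 2))
    return exc - 0.8 * inh

def model_response(strf, spectrogram):
    """Linear-rectified response: correlate the STRF with the spectrogram
    along time, then half-wave rectify to get a nonnegative rate."""
    n_freq, n_tap = strf.shape
    T = spectrogram.shape[1]
    out = np.zeros(T)
    for t in range(n_tap, T):
        out[t] = np.sum(strf * spectrogram[:, t - n_tap:t])
    return np.maximum(out, 0.0)

def response_distance(r1, r2, tau=3.0):
    """van Rossum-style distance: smooth each response with an exponential
    kernel of time constant tau, then take the L2 norm of the difference.
    Small tau emphasizes fine temporal structure."""
    k = np.exp(-np.arange(0, 5 * int(tau)) / tau)
    s1 = np.convolve(r1, k)[:len(r1)]
    s2 = np.convolve(r2, k)[:len(r2)]
    return np.sqrt(np.sum((s1 - s2) ** 2))
```

Discrimination between two songs can then be scored by whether each trial's response lies closer (in this distance) to the template response of the correct song; shrinking `tau` probes the gain from fine-timescale structure noted in the abstract.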


2001 ◽  
Vol 85 (4) ◽  
pp. 1732-1749 ◽  
Author(s):  
Steven W. Cheung ◽  
Purvis H. Bedenbaugh ◽  
Srikantan S. Nagarajan ◽  
Christoph E. Schreiner

The spatial organization of response parameters in squirrel monkey primary auditory cortex (AI) accessible on the temporal gyrus was determined with the excitatory receptive field to pure tone stimuli. Dense, microelectrode mapping of the temporal gyrus in four animals revealed that characteristic frequency (CF) had a smooth, monotonic gradient that systematically changed from lower values (0.5 kHz) in the caudoventral quadrant to higher values (5–6 kHz) in the rostrodorsal quadrant. The extent of AI on the temporal gyrus was ∼4 mm in the rostrocaudal axis and 2–3 mm in the dorsoventral axis. The entire length of isofrequency contours below 6 kHz was accessible for study. Several independent, spatially organized functional response parameters were demonstrated for the squirrel monkey AI. Latency, the asymptotic minimum arrival time for spikes with increasing sound pressure levels at CF, was topographically organized as a monotonic gradient across AI nearly orthogonal to the CF gradient. Rostral AI had longer latencies (range = 4 ms). Threshold and bandwidth co-varied with the CF. Factoring out the contribution of the CF on threshold variance, residual threshold showed a monotonic gradient across AI that had higher values (range = 10 dB) caudally. The orientation of the threshold gradient was significantly different from the CF gradient. CF-corrected bandwidth, residual Q10, was spatially organized in local patches of coherent values whose loci were specific for each monkey. These data support the existence of multiple, overlying receptive field gradients within AI and form the basis to develop a conceptual framework to understand simple and complex sound coding in mammals.


2011 ◽  
Vol 106 (2) ◽  
pp. 1016-1027 ◽  
Author(s):  
Martin Pienkowski ◽  
Jos J. Eggermont

The distribution of neuronal characteristic frequencies over the area of primary auditory cortex (AI) roughly reflects the tonotopic organization of the cochlea. However, because the area of AI activated by any given sound frequency increases erratically with sound level, it has generally been proposed that frequency is represented in AI not with a rate-place code but with some more complex, distributed code. Here, on the basis of both spike and local field potential (LFP) recordings in the anesthetized cat, we show that the tonotopic representation in AI is much more level tolerant when mapped with spectrotemporally dense tone pip ensembles rather than with individually presented tone pips. That is, we show that the tuning properties of individual unit and LFP responses are less variable with sound level under dense compared with sparse stimulation, and that the spatial frequency resolution achieved by the AI neural population at moderate stimulus levels (65 dB SPL) is better with densely than with sparsely presented sounds. This implies that nonlinear processing in the central auditory system can compensate (in part) for the level-dependent coding of sound frequency in the cochlea, and suggests that there may be a functional role for the cortical tonotopic map in the representation of complex sounds.


2009 ◽  
Vol 102 (6) ◽  
pp. 3329-3339 ◽  
Author(s):  
Nima Mesgarani ◽  
Stephen V. David ◽  
Jonathan B. Fritz ◽  
Shihab A. Shamma

Population responses of cortical neurons encode considerable details about sensory stimuli, and the encoded information is likely to change with stimulus context and behavioral conditions. The details of encoding are difficult to discern across large sets of single neuron data because of the complexity of naturally occurring stimulus features and cortical receptive fields. To overcome this problem, we used the method of stimulus reconstruction to study how complex sounds are encoded in primary auditory cortex (AI). This method uses a linear spectro-temporal model to map neural population responses to an estimate of the stimulus spectrogram, thereby enabling a direct comparison between the original stimulus and its reconstruction. By assessing the fidelity of such reconstructions from responses to modulated noise stimuli, we estimated the range over which AI neurons can faithfully encode spectro-temporal features. For stimuli containing statistical regularities (typical of those found in complex natural sounds), we found that knowledge of these regularities substantially improves reconstruction accuracy over reconstructions that do not take advantage of this prior knowledge. Finally, contrasting stimulus reconstructions under different behavioral states showed a novel view of the rapid changes in spectro-temporal response properties induced by attentional and motivational state.
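The reconstruction step can be illustrated with a zero-lag linear decoder fit by ridge regression; this is a simplification of the spectro-temporal (multi-lag) model in the abstract, and the function names and regularization are assumptions:

```python
import numpy as np

def fit_reconstruction_filter(responses, spectrogram, lam=1e-3):
    """Ridge-regression fit of a linear map G from instantaneous population
    responses to the stimulus spectrogram, minimizing
    ||spectrogram - G @ responses||^2 + lam * ||G||^2.
    responses: (n_neurons, T); spectrogram: (n_freq, T)."""
    R = np.asarray(responses, dtype=float)
    S = np.asarray(spectrogram, dtype=float)
    return S @ R.T @ np.linalg.inv(R @ R.T + lam * np.eye(R.shape[0]))

def reconstruct(G, responses):
    """Apply the fitted filter to (ideally held-out) population responses,
    yielding an estimated spectrogram for direct comparison with the stimulus."""
    return G @ responses
```

Reconstruction fidelity is then the similarity (e.g. correlation) between the estimated and actual spectrograms; in the spirit of the abstract, filters fit on stimuli sharing the test stimuli's statistical regularities should reconstruct better than filters that ignore them.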

