Primary auditory cortex represents the location of sound sources in a cue-invariant manner

2018 ◽  
Author(s):  
Katherine C Wood ◽  
Stephen M Town ◽  
Jennifer K Bizley

Abstract
Auditory cortex is required for sound localisation, but how neural firing in auditory cortex underlies our perception of sources in space remains unknown. We measured spatial receptive fields in animals actively attending to spatial location while they performed a relative localisation task using stimuli that varied in the spatial cues that they provided. Manipulating the availability of binaural and spectral localisation cues had mild effects on the ferrets' performance and little impact on the spatial tuning of neurons in primary auditory cortex (A1). Consistent with a representation of space, a subpopulation of neurons encoded spatial position across localisation cue types. Spatial receptive fields measured in the presence of a competing sound source were sharper than those measured in a single-source configuration. Together, these observations suggest that A1 encodes the location of auditory objects as opposed to spatial cue values. We compared our data to predictions generated from two theories about how space is represented in auditory cortex: the two-channel model, where location is encoded by the relative activity in each hemisphere, and the labelled-line model, where location is represented by the activity pattern of individual cells. The representation of sound location in A1 was mainly contralateral, but peak firing rates were distributed across the hemifield, consistent with a labelled-line model in each hemisphere representing contralateral space. Comparing reconstructions of sound location from neural activity, we found that a labelled-line architecture far outperformed two-channel systems. Reconstruction ability increased with increasing channel number, saturating at around 20 channels.

Significance statement
Our perception of a sound scene is one of distinct sound sources, each of which can be localised, yet auditory space must be computed from sound localisation cues that arise principally by comparing the sound at the two ears. Here we ask: (1) do individual neurons in auditory cortex represent space, or sound localisation cues? (2) How is neural activity ‘read out’ for spatial perception? We recorded from auditory cortex in ferrets performing a localisation task and describe a subpopulation of neurons that represent space across localisation cues. Our data are consistent with auditory space being read out using the pattern of activity across neurons (a labelled line) rather than by averaging activity within each hemisphere (a two-channel model).
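The two readout schemes contrasted in this abstract can be illustrated with a small simulation. This is a minimal sketch, not the authors' analysis: the Gaussian and sigmoid tuning curves, the 20-channel population, and the noise level are all assumptions chosen for illustration. A labelled-line population decodes azimuth from the pattern of activity across many narrowly tuned channels, while a two-channel model must recover it from the relative activity of one broad channel per hemisphere.

```python
import numpy as np

rng = np.random.default_rng(0)
angles = np.linspace(-90, 90, 13)      # candidate sound-source azimuths (deg)
centers = np.linspace(-90, 90, 20)     # assumed labelled-line tuning centres

def labelled_line_rates(az):
    """Many narrowly tuned channels, one per preferred azimuth (Gaussian tuning)."""
    return np.exp(-0.5 * ((az - centers) / 30.0) ** 2)

def two_channel_rates(az):
    """One broadly tuned opponent channel per hemisphere (sigmoid tuning)."""
    left = 1.0 / (1.0 + np.exp(az / 30.0))
    return np.array([left, 1.0 - left])

def decode(rates_fn, az, noise=0.05):
    """Nearest-template decode of azimuth from a noisy population response."""
    templates = np.stack([rates_fn(a) for a in angles])
    observed = rates_fn(az) + noise * rng.standard_normal(templates.shape[1])
    return angles[np.argmin(((templates - observed) ** 2).sum(axis=1))]

def mean_error(rates_fn, trials_per_angle=15):
    errs = [abs(decode(rates_fn, a) - a)
            for a in angles for _ in range(trials_per_angle)]
    return float(np.mean(errs))

# The labelled-line population localises more accurately than the two-channel
# readout in this toy setting, especially at peripheral azimuths where the
# opponent sigmoids saturate.
print(mean_error(labelled_line_rates), mean_error(two_channel_rates))
```

The advantage at peripheral locations arises because the two opponent channels carry almost no gradient once they saturate, whereas some narrowly tuned channel is always on the steep part of its tuning curve.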

2001 ◽  
Vol 86 (2) ◽  
pp. 1043-1046 ◽  
Author(s):  
Thomas D. Mrsic-Flogel ◽  
Andrew J. King ◽  
Rick L. Jenison ◽  
Jan W. H. Schnupp

The localization of sounds in space is based on spatial cues that arise from the acoustical properties of the head and external ears. Individual differences in localization cue values result from variability in the shape and dimensions of these structures. We have mapped spatial response fields of high-frequency neurons in ferret primary auditory cortex using virtual sound sources based either on the animal's own ears or on the ears of other subjects. For 73% of units, the response fields measured using the animals' own ears differed significantly in shape and/or position from those obtained using spatial cues from another ferret. The observed changes correlated with individual differences in the acoustics. These data are consistent with previous reports showing that humans localize less accurately when listening to virtual sounds from other individuals. Together these findings support the notion that neural mechanisms underlying auditory space perception are calibrated by experience to the properties of the individual.


2005 ◽  
Vol 93 (1) ◽  
pp. 378-392 ◽  
Author(s):  
Masahiko Tomita ◽  
Jos J. Eggermont

Recordings were made from the right primary auditory cortex in 17 adult cats using two eight-electrode arrays. We recorded the neural activity under spontaneous firing conditions and during random, multi-frequency stimulation, at 65 dB SPL, from the same units. Multiple single-unit (MSU) recordings (281) were stationary through 900 s of silence and during 900 s of stimulation. The cross-correlograms of 545 MSU pairs with peak lag times within 10 ms of zero lag were analyzed. Stimulation reduced the correlation in background activity, and as a result, the signal-to-noise ratio of correlated activity in response to the stimulus was enhanced. Reconstructed spectro-temporal receptive fields (STRFs) for coincident spikes showed larger STRF overlaps, suggesting that coincident neural activity serves to sharpen the resolution in the spectro-temporal domain. The cross-correlation for spikes contributing to the STRF depended much more strongly on the STRF overlap than the cross-correlation during either silence or for spikes that did not contribute to the STRF (OUT-STRF). Compared with that for firings during silence, the cross-correlation for the OUT-STRF spikes was much reduced despite the unchanged firing rate. This suggests that stimulation breaks up the large neural assembly that exists during long periods of silence into a stimulus-related one and possibly several others. As a result, the OUT-STRF spikes of the unit pairs, now likely distributed across several assemblies, are less correlated than during long periods of silence. Thus the ongoing network activity is significantly different from that during stimulation and likely changes with arousal during stimulation.
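The cross-correlogram analysis described above can be sketched compactly: bin each spike train, correlate the binned trains, and inspect the histogram of spike-time differences near zero lag. This is a generic illustration, not the authors' exact pipeline; the bin width, lag window, and spike trains are assumptions for the example.

```python
import numpy as np

def cross_correlogram(spikes_a, spikes_b, bin_ms=1.0, max_lag_ms=50.0, dur_ms=1000.0):
    """Histogram of time differences (b minus a) between spikes of two units."""
    edges = np.arange(0.0, dur_ms + bin_ms, bin_ms)
    a = np.histogram(spikes_a, edges)[0].astype(float)
    b = np.histogram(spikes_b, edges)[0].astype(float)
    full = np.correlate(b, a, mode="full")        # index a.size - 1 is zero lag
    lags = (np.arange(full.size) - (a.size - 1)) * bin_ms
    keep = np.abs(lags) <= max_lag_ms
    return lags[keep], full[keep]

# Two synthetic trains with a fixed 5 ms offset: the correlogram peaks at +5 ms,
# i.e. within the 10 ms window used to select pairs in the study.
t_a = np.arange(10.0, 990.0, 20.0)    # unit A fires every 20 ms
t_b = t_a + 5.0                       # unit B fires 5 ms after each A spike
lags, ccg = cross_correlogram(t_a, t_b)
print(lags[np.argmax(ccg)])           # -> 5.0
```

In practice such raw correlograms are corrected for the firing-rate (shift or shuffle) predictor before peak values are interpreted, which is how stimulus-locked coincidences are separated from genuine neural correlation.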


2019 ◽  
Author(s):  
Jesyin Lai ◽  
Stephen V. David

ABSTRACT
Chronic vagus nerve stimulation (VNS) can facilitate learning of sensory and motor behaviors. VNS is believed to trigger release of neuromodulators, including norepinephrine and acetylcholine, which can mediate cortical plasticity associated with learning. Most previous work has studied effects of VNS over many days, and less is known about how acute VNS influences neural coding and behavior over the shorter term. To explore this question, we measured effects of VNS on learning of an auditory discrimination over 1-2 days. Ferrets implanted with cuff electrodes on the vagus nerve were trained by classical conditioning on a tone frequency-reward association. One tone was associated with reward while another tone was not. The frequencies and reward associations of the tones were changed every two days, requiring learning of a new relationship. When the tones (both rewarded and non-rewarded) were paired with VNS, rates of learning increased on the first day following a change in reward association. To examine VNS effects on auditory coding, we recorded single- and multi-unit neural activity in primary auditory cortex (A1) of passively listening animals following brief periods of VNS (20 trials/session) paired with tones. Because afferent VNS induces changes in pupil size associated with fluctuations in neuromodulation, we also measured pupil size during recordings. After pairing VNS with a neuron’s best-frequency (BF) tone, responses in a subpopulation of neurons were reduced. Pairing with an off-BF tone or performing VNS during the inter-trial interval had no effect on responses. We separated the change in A1 activity into two components, one that could be predicted by fluctuations in pupil and one that persisted after VNS and was not accounted for by pupil. The BF-specific reduction in neural responses remained, even after regressing out changes that could be explained by pupil. In addition, the size of VNS-mediated changes in pupil predicted the magnitude of persistent changes in the neural response. This interaction suggests that changes in neuromodulation associated with arousal gate the long-term effects of VNS on neural activity. Taken together, these results support a role for VNS in auditory learning and help establish VNS as a tool to facilitate neural plasticity.
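The separation into a pupil-predicted component and a persistent residual can be illustrated with a simple linear regression. This is a hypothetical sketch, not the authors' model: the per-trial pupil values, the size of the simulated VNS-related reduction, and the noise level are all invented for the example. The point is that a rate change which is not mediated by pupil-linked arousal survives in the residuals after pupil is regressed out.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials = 300

# Hypothetical per-trial data: pupil size and an evoked firing rate that has a
# pupil-linked component plus a persistent 2-spike/s reduction after VNS.
pupil = rng.standard_normal(n_trials)
post_vns = np.repeat([0.0, 1.0], n_trials // 2)   # 0 = before, 1 = after VNS
rate = 10.0 + 1.5 * pupil - 2.0 * post_vns + 0.5 * rng.standard_normal(n_trials)

# Regress rate on pupil; the residuals are the pupil-independent component.
X = np.column_stack([np.ones(n_trials), pupil])
beta, *_ = np.linalg.lstsq(X, rate, rcond=None)
residual = rate - X @ beta

# The simulated VNS-related reduction survives in the residuals (negative value).
persistent_change = residual[post_vns == 1].mean() - residual[post_vns == 0].mean()
print(round(persistent_change, 1))
```

If the reduction had instead been driven entirely by pupil-linked neuromodulation, it would be absorbed by the regression and the residual difference would be near zero.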


2005 ◽  
Vol 94 (4) ◽  
pp. 2970-2975 ◽  
Author(s):  
Rajiv Narayan ◽  
Ayla Ergün ◽  
Kamal Sen

Although auditory cortex is thought to play an important role in processing complex natural sounds such as speech and animal vocalizations, the specific functional roles of cortical receptive fields (RFs) remain unclear. Here, we study the relationship between a behaviorally important function, the discrimination of natural sounds, and the structure of cortical RFs. We examine this problem in the model system of songbirds, using a computational approach. First, we constructed model neurons based on the spectral temporal RF (STRF), a widely used description of auditory cortical RFs. We focused on delayed inhibitory STRFs, a class of STRFs experimentally observed in primary auditory cortex (ACx) and its analog in songbirds (field L), which consist of an excitatory subregion and a delayed inhibitory subregion cotuned to a characteristic frequency. We quantified the discrimination of birdsongs by model neurons, examining both the dynamics and temporal resolution of discrimination, using a recently proposed spike distance metric (SDM). We found that single model neurons with delayed inhibitory STRFs can discriminate accurately between songs. Discrimination improves dramatically when the temporal structure of the neural response at fine timescales is considered. When we compared discrimination by model neurons with and without the inhibitory subregion, we found that the presence of the inhibitory subregion can improve discrimination. Finally, we modeled a cortical microcircuit with delayed synaptic inhibition, a candidate mechanism underlying delayed inhibitory STRFs, and showed that blocking inhibition in this model circuit degrades discrimination.
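A commonly used spike distance metric of this kind is the van Rossum distance: convolve each spike train with a decaying exponential of time constant tau and take the L2 norm of the difference. Small tau emphasises fine temporal structure; large tau approaches a rate-based comparison. This sketch assumes the van Rossum form and invented spike times; the abstract does not specify which SDM the authors used.

```python
import numpy as np

def van_rossum_distance(t1, t2, tau=10.0, dt=0.1, dur=1000.0):
    """van Rossum distance between two spike trains (times in ms).

    Each train is convolved with exp(-t/tau); the distance is the L2 norm
    of the difference between the two filtered traces."""
    time = np.arange(0.0, dur, dt)
    def filtered(spikes):
        f = np.zeros_like(time)
        for s in spikes:
            f += (time >= s) * np.exp(-(time - s) / tau)
        return f
    diff = filtered(t1) - filtered(t2)
    return np.sqrt((diff ** 2).sum() * dt / tau)

a = [100.0, 300.0, 500.0]                        # hypothetical spike times
b = [100.0, 300.0, 500.0]                        # identical train
c = [120.0, 340.0, 700.0]                        # jittered / displaced train
print(van_rossum_distance(a, b))                 # identical trains -> 0.0
print(van_rossum_distance(a, c) > van_rossum_distance(a, b))   # True
```

Song discrimination is then typically done by nearest-neighbour classification: a response is assigned to whichever song's template responses it is closest to under this metric, and sweeping tau reveals the temporal resolution at which discrimination is best.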


2006 ◽  
Vol 96 (2) ◽  
pp. 746-764 ◽  
Author(s):  
Jos J. Eggermont

Spiking activity was recorded from cat auditory cortex using multi-electrode arrays. Cross-correlograms were calculated for spikes recorded on separate microelectrodes. The pair-wise cross-correlation matrix was constructed for the peak values of the correlograms. Hierarchical clustering was performed on the cross-correlation matrix for six stimulus conditions. These were silence, three multi-tone stimulus ensembles with different spectral densities, low-pass amplitude-modulated noise, and Poisson-distributed click trains that each lasted 15 min. The resulting neuron clusters reflect patches in cortex of up to several mm2 in size that expand and contract in response to different stimuli. Cluster positions and size were very similar for spontaneous activity and multi-tone stimulus-evoked activity but differed between those conditions and the noise and click stimuli. Cluster size was significantly larger in posterior auditory field (PAF) compared with primary auditory cortex (AI), whereas the fraction of common spikes (within a 10-ms window) across all electrode activity participating in a cluster was significantly higher in AI compared with PAF. Clusters crossed area boundaries in <5% of the cases in which simultaneous recordings were made in AI and PAF. Clusters are therefore similar to but not synonymous with the traditional view of neural assemblies. Common-spike spectrotemporal receptive fields (STRFs) were obtained for common-spike activity and all-spike activity within a cluster. Common-spike STRFs had a higher signal-to-noise ratio than all-spike STRFs and generally showed spectral and temporal sharpening. The coincident and noncoincident output of the clusters could potentially act in parallel and may serve different modes of stimulus coding.
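The clustering step described above, hierarchical clustering on a matrix of peak correlogram values, can be sketched as follows. The correlation matrix here is invented (six hypothetical recording sites forming two correlated groups), and average linkage is an assumption; the abstract does not state which linkage rule was used.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical peak cross-correlation matrix for 6 recording sites:
# sites 0-2 and sites 3-5 form two internally correlated groups.
corr = np.array([
    [1.00, 0.80, 0.70, 0.10, 0.10, 0.20],
    [0.80, 1.00, 0.75, 0.15, 0.10, 0.10],
    [0.70, 0.75, 1.00, 0.10, 0.20, 0.10],
    [0.10, 0.15, 0.10, 1.00, 0.85, 0.70],
    [0.10, 0.10, 0.20, 0.85, 1.00, 0.80],
    [0.20, 0.10, 0.10, 0.70, 0.80, 1.00],
])

dist = 1.0 - corr                          # convert similarity to distance
condensed = squareform(dist, checks=False) # condensed form expected by linkage
Z = linkage(condensed, method="average")   # average-linkage dendrogram
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)                              # the two groups separate cleanly
```

Repeating this per stimulus condition and comparing the resulting label maps across conditions is what reveals clusters that expand, contract, or regroup with the stimulus.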


2015 ◽  
Vol 113 (2) ◽  
pp. 475-486 ◽  
Author(s):  
Melanie A. Kok ◽  
Daniel Stolzberg ◽  
Trecia A. Brown ◽  
Stephen G. Lomber

Current models of hierarchical processing in auditory cortex have been based principally on anatomical connectivity while functional interactions between individual regions have remained largely unexplored. Previous cortical deactivation studies in the cat have addressed functional reciprocal connectivity between primary auditory cortex (A1) and other hierarchically lower level fields. The present study sought to assess the functional contribution of inputs along multiple stages of the current hierarchical model to a higher order area, the dorsal zone (DZ) of auditory cortex, in the anaesthetized cat. Cryoloops were placed over A1 and posterior auditory field (PAF). Multiunit neuronal responses to noise burst and tonal stimuli were recorded in DZ during cortical deactivation of each field individually and in concert. Deactivation of A1 suppressed peak neuronal responses in DZ regardless of stimulus and resulted in increased minimum thresholds and reduced absolute bandwidths for tone frequency receptive fields in DZ. PAF deactivation had less robust effects on DZ firing rates and receptive fields compared with A1 deactivation, and combined A1/PAF cooling was largely driven by the effects of A1 deactivation at the population level. These results provide physiological support for the current anatomically based model of both serial and parallel processing schemes in auditory cortical hierarchical organization.


2002 ◽  
Vol 174 (1-2) ◽  
pp. 19-31 ◽  
Author(s):  
André Rupp ◽  
Stefan Uppenkamp ◽  
Alexander Gutschalk ◽  
Roland Beucker ◽  
Roy D Patterson ◽  
...  

1996 ◽  
Vol 16 (14) ◽  
pp. 4420-4437 ◽  
Author(s):  
John F. Brugge ◽  
Richard A. Reale ◽  
Joseph E. Hind

2009 ◽  
Vol 102 (6) ◽  
pp. 3329-3339 ◽  
Author(s):  
Nima Mesgarani ◽  
Stephen V. David ◽  
Jonathan B. Fritz ◽  
Shihab A. Shamma

Population responses of cortical neurons encode considerable details about sensory stimuli, and the encoded information is likely to change with stimulus context and behavioral conditions. The details of encoding are difficult to discern across large sets of single neuron data because of the complexity of naturally occurring stimulus features and cortical receptive fields. To overcome this problem, we used the method of stimulus reconstruction to study how complex sounds are encoded in primary auditory cortex (AI). This method uses a linear spectro-temporal model to map neural population responses to an estimate of the stimulus spectrogram, thereby enabling a direct comparison between the original stimulus and its reconstruction. By assessing the fidelity of such reconstructions from responses to modulated noise stimuli, we estimated the range over which AI neurons can faithfully encode spectro-temporal features. For stimuli containing statistical regularities (typical of those found in complex natural sounds), we found that knowledge of these regularities substantially improves reconstruction accuracy over reconstructions that do not take advantage of this prior knowledge. Finally, contrasting stimulus reconstructions under different behavioral states provided a novel view of the rapid changes in spectro-temporal response properties induced by attentional and motivational state.
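The core of linear stimulus reconstruction is a regularized regression from population responses back to the stimulus spectrogram. The sketch below is a toy version under stated assumptions: the "spectrogram" and the linear neural encoding are simulated, and time-lagged reconstruction filters (used in the real method) are omitted so the decoder maps each time bin independently.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_neurons, n_freq = 500, 12, 8       # time bins, units, frequency channels

# Hypothetical stimulus spectrogram and simulated linear population responses.
stim = rng.standard_normal((T, n_freq))
W_true = rng.standard_normal((n_freq, n_neurons)) / np.sqrt(n_freq)
resp = stim @ W_true + 0.1 * rng.standard_normal((T, n_neurons))

# Reconstruction filter: ridge regression from responses back to the stimulus.
lam = 1e-2
G = np.linalg.solve(resp.T @ resp + lam * np.eye(n_neurons), resp.T @ stim)
stim_hat = resp @ G

# Fidelity: correlation between original and reconstructed spectrograms,
# close to 1 in this low-noise simulation.
r = np.corrcoef(stim.ravel(), stim_hat.ravel())[0, 1]
print(round(r, 2))
```

Incorporating prior knowledge of stimulus regularities, as the abstract describes, amounts to biasing this decoder toward the statistics of natural sounds rather than the flat ridge prior used here.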

