Spatial Distribution of Responses to Simple and Complex Sounds in the Primary Auditory Cortex

1998 ◽  
Vol 3 (2-3) ◽  
pp. 104-122 ◽  
Author(s):  
Christoph E. Schreiner

1990 ◽
Vol 64 (3) ◽  
pp. 888-902 ◽  
Author(s):  
R. Rajan ◽  
L. M. Aitkin ◽  
D. R. Irvine

1. The organization of azimuthal sensitivity of units across the dorsoventral extent of primary auditory cortex (AI) was studied in electrode penetrations made along frequency-band strips of AI. Azimuthal sensitivity for each unit was represented by a mean azimuth function (MF) calculated from all azimuth functions obtained to characteristic-frequency (CF) stimuli at intensities 20 dB or more above threshold. MFs were classified as contra-field, ipsi-field, central-field, omnidirectional, or multipeaked, according to the criteria established in the companion paper (Rajan et al. 1990).

2. The spatial distribution of three types of MFs was not random across frequency-band strips: for contra-field, ipsi-field, and central-field MFs there was a significant tendency for clustering of functions of the same type in sequentially encountered units. Occasionally, repeated clusters of a particular MF type could be found along a frequency-band strip. In contrast, the spatial distribution of omnidirectional MFs along frequency-band strips appeared to be random.

3. Apart from the clustering of MF types, there were also regions along a frequency-band strip in which the type of MF changed rapidly between units isolated over short distances. Most often such changes took the form of irregular, rapid juxtapositions of MF types; less frequently they appeared to be more systematic transitions from one MF type to another. In contrast to these changes seen in penetrations oblique to the cortical surface, much less change in azimuthal sensitivity was displayed by successively isolated units in penetrations made normal to the cortical surface.

4. To determine whether some significant feature or features of azimuthal sensitivity shifted in a more continuous and/or systematic manner along frequency-band strips, azimuthal sensitivity was quantified in terms of the peak-response azimuth (PRA) of the MFs of successive units and the azimuthal range over which the peaks occurred in the individual azimuth functions contributing to each MF (the peak-response range). In different experiments, shifts in these measures along a frequency-band strip generally fell into one of four categories: 1) shifts across the entire frontal hemifield; 2) clustering in the contralateral quadrant; 3) clustering in the ipsilateral quadrant; and 4) clustering about the midline. In two cases more than one of these four patterns was found along a frequency-band strip. (ABSTRACT TRUNCATED AT 400 WORDS)
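The azimuthal measures used above (the MF, its peak-response azimuth, and the peak-response range) lend themselves to a simple computation. The sketch below is illustrative only, assuming each azimuth function is stored as a spike-count array sampled at a common set of azimuths; the function names are ours, not the authors'.

```python
import numpy as np

def mean_azimuth_function(azimuth_functions):
    """Mean azimuth function (MF): average of the azimuth functions obtained
    at each supra-threshold stimulus level (rows = levels, columns = azimuths)."""
    return np.asarray(azimuth_functions, dtype=float).mean(axis=0)

def peak_response_azimuth(mf, azimuths):
    """Peak-response azimuth (PRA): the azimuth at which the MF peaks."""
    return azimuths[int(np.argmax(mf))]

def peak_response_range(azimuth_functions, azimuths):
    """Azimuthal range over which the individual azimuth functions peak."""
    peaks = [azimuths[int(np.argmax(af))] for af in azimuth_functions]
    return min(peaks), max(peaks)

# Toy spike counts at azimuths -90..+90 deg for two supra-threshold levels
azimuths = np.arange(-90, 91, 30)
afs = [[0, 1, 3, 2, 1, 0, 0],   # this azimuth function peaks at -30 deg
       [0, 1, 2, 4, 1, 0, 0]]   # this one peaks at 0 deg
mf = mean_azimuth_function(afs)
pra = peak_response_azimuth(mf, azimuths)    # 0 deg
prr = peak_response_range(afs, azimuths)     # (-30, 0)
```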


2011 ◽  
Vol 106 (2) ◽  
pp. 1016-1027 ◽  
Author(s):  
Martin Pienkowski ◽  
Jos J. Eggermont

The distribution of neuronal characteristic frequencies over the area of primary auditory cortex (AI) roughly reflects the tonotopic organization of the cochlea. However, because the area of AI activated by any given sound frequency increases erratically with sound level, it has generally been proposed that frequency is represented in AI not with a rate-place code but with some more complex, distributed code. Here, on the basis of both spike and local field potential (LFP) recordings in the anesthetized cat, we show that the tonotopic representation in AI is much more level tolerant when mapped with spectrotemporally dense tone pip ensembles rather than with individually presented tone pips. That is, we show that the tuning properties of individual unit and LFP responses are less variable with sound level under dense compared with sparse stimulation, and that the spatial frequency resolution achieved by the AI neural population at moderate stimulus levels (65 dB SPL) is better with densely than with sparsely presented sounds. This implies that nonlinear processing in the central auditory system can compensate (in part) for the level-dependent coding of sound frequency in the cochlea, and suggests that there may be a functional role for the cortical tonotopic map in the representation of complex sounds.
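One way to see why dense ensembles support tuning estimates is that they permit reverse correlation. The toy sketch below (our construction, not the authors' analysis) recovers a unit's best frequency channel by spike-triggered averaging over a dense random tone-pip ensemble, in which pips from many frequency channels overlap in time.

```python
import numpy as np

rng = np.random.default_rng(0)
n_freq, n_bins = 16, 5000

# Dense random tone-pip ensemble: in each time bin, each of 16 frequency
# channels carries a pip with probability 0.1, so pips overlap in time
stimulus = (rng.random((n_freq, n_bins)) < 0.1).astype(float)

# Hypothetical unit tuned to channel 7: it fires (with probability 0.8)
# whenever a pip occurs in its channel
cf_channel = 7
spikes = stimulus[cf_channel] * (rng.random(n_bins) < 0.8)

# Spike-triggered average over frequency channels recovers the tuning:
# ~1 at the unit's own channel, ~0.1 (the pip probability) elsewhere
sta = stimulus @ spikes / spikes.sum()
best_channel = int(np.argmax(sta))
```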


2009 ◽  
Vol 102 (6) ◽  
pp. 3329-3339 ◽  
Author(s):  
Nima Mesgarani ◽  
Stephen V. David ◽  
Jonathan B. Fritz ◽  
Shihab A. Shamma

Population responses of cortical neurons encode considerable details about sensory stimuli, and the encoded information is likely to change with stimulus context and behavioral conditions. The details of encoding are difficult to discern across large sets of single neuron data because of the complexity of naturally occurring stimulus features and cortical receptive fields. To overcome this problem, we used the method of stimulus reconstruction to study how complex sounds are encoded in primary auditory cortex (AI). This method uses a linear spectro-temporal model to map neural population responses to an estimate of the stimulus spectrogram, thereby enabling a direct comparison between the original stimulus and its reconstruction. By assessing the fidelity of such reconstructions from responses to modulated noise stimuli, we estimated the range over which AI neurons can faithfully encode spectro-temporal features. For stimuli containing statistical regularities (typical of those found in complex natural sounds), we found that knowledge of these regularities substantially improves reconstruction accuracy over reconstructions that do not take advantage of this prior knowledge. Finally, contrasting stimulus reconstructions under different behavioral states offered a novel view of the rapid changes in spectro-temporal response properties induced by attentional and motivational state.
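In spirit, the reconstruction step amounts to fitting a regularized linear map from population responses back to the stimulus spectrogram. The sketch below is a zero-lag simplification of the spectro-temporal filters described above; the ridge parameter and the simulated data are our assumptions.

```python
import numpy as np

def fit_reconstruction_filter(responses, spectrogram, ridge=1e-3):
    """Fit a linear map G such that G @ responses approximates the spectrogram.
    responses: (n_neurons, n_time); spectrogram: (n_freq, n_time).
    Ridge-regularized least squares: G = S R^T (R R^T + ridge*I)^-1."""
    R, S = responses, spectrogram
    return S @ R.T @ np.linalg.inv(R @ R.T + ridge * np.eye(R.shape[0]))

# Simulated population: each neuron responds as a noisy linear mix of bands
rng = np.random.default_rng(1)
n_freq, n_neurons, n_time = 8, 20, 400
spec = rng.random((n_freq, n_time))                  # "stimulus spectrogram"
mixing = rng.standard_normal((n_neurons, n_freq))    # neurons' spectral weights
resp = mixing @ spec + 0.01 * rng.standard_normal((n_neurons, n_time))

G = fit_reconstruction_filter(resp, spec)
recon = G @ resp
fidelity = np.corrcoef(spec.ravel(), recon.ravel())[0, 1]   # close to 1
```

A full spectro-temporal filter additionally spans multiple time lags; the same normal-equation form handles that case by stacking time-lagged copies of the response matrix.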


Author(s):  
Israel Nelken

Understanding the principles by which sensory systems represent natural stimuli is one of the holy grails of neuroscience. In the auditory system, the study of the coding of natural sounds has a particular prominence. Indeed, the relationships between neural responses to the simple stimuli (usually pure tone bursts) often used to characterize auditory neurons and responses to complex sounds (in particular natural sounds) may themselves be complex. Many different classes of natural sounds have been used to study the auditory system. Sound families that researchers have used to good effect in this endeavor include human speech, species-specific vocalizations, an “acoustic biotope” selected in one way or another, and sets of artificial sounds that mimic important features of natural sounds. Peripheral and brainstem representations of natural sounds are relatively well understood. The properties of the peripheral auditory system play a dominant role, and further processing occurs mostly within the frequency channels determined by these properties. At the level of the inferior colliculus, the highest brainstem station, representational complexity increases substantially due to the convergence of multiple processing streams. Undoubtedly, the most explored part of the auditory system, in terms of responses to natural sounds, is the primary auditory cortex. In spite of over 50 years of research, there is still no commonly accepted view of the nature of the population code for natural sounds in the auditory cortex. Neurons in the auditory cortex are believed by some to be primarily linear spectro-temporal filters, by others to respond to conjunctions of important sound features, or even to encode perceptual concepts such as “auditory objects.” Whatever the exact mechanism, many studies consistently report a substantial increase in the variability of the response patterns of cortical neurons to natural sounds.
The generation of such variation may be the main contribution of auditory cortex to the coding of natural sounds.


2018 ◽  
Vol 29 (7) ◽  
pp. 2998-3009 ◽  
Author(s):  
Haifu Li ◽  
Feixue Liang ◽  
Wen Zhong ◽  
Linqing Yan ◽  
Lucas Mesik ◽  
...  

Spatial size tuning in the visual cortex has been considered an important neuronal functional property for sensory perception. However, an analogous mechanism in the auditory system has remained controversial. In the present study, cell-attached recordings in the primary auditory cortex (A1) of awake mice revealed that excitatory neurons can be categorized into three types according to their bandwidth tuning profiles in response to band-passed noise (BPN) stimuli: nonmonotonic (NM), flat, and monotonic, with the latter two considered non-tuned for bandwidth. The prevalence of bandwidth-tuned (i.e., NM) neurons increases significantly from layer 4 to layer 2/3. With sequential cell-attached and whole-cell voltage-clamp recordings from the same neurons, we found that the bandwidth preference of excitatory neurons is largely determined by the excitatory synaptic input they receive, and that the bandwidth selectivity is further enhanced by flatly tuned inhibition observed in all cells. The latter can be attributed at least partially to the flat tuning of parvalbumin inhibitory neurons. The tuning of auditory cortical neurons for bandwidth of BPN may contribute to the processing of complex sounds.
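The three-way categorization of bandwidth tuning profiles can be sketched as a simple rule on a firing-rate-vs-bandwidth curve. The numeric criteria below are illustrative placeholders, not the authors' definitions.

```python
import numpy as np

def classify_bandwidth_tuning(rates, flat_tol=0.15):
    """Classify a firing-rate-vs-bandwidth curve (ordered narrow to wide BPN).
    flat:       peak-to-peak spread within flat_tol of the mean rate
    monotonic:  rate maximal at the widest bandwidth
    NM:         nonmonotonic, i.e. peaked at an intermediate bandwidth
    (Both thresholds here are illustrative, not the authors' criteria.)"""
    rates = np.asarray(rates, dtype=float)
    if np.ptp(rates) <= flat_tol * rates.mean():
        return "flat"
    return "monotonic" if int(np.argmax(rates)) == len(rates) - 1 else "NM"

# Toy rate curves for the three profile types
examples = {
    "flat":      [10.0, 10.4, 10.2, 10.3],
    "monotonic": [2, 4, 6, 9],
    "NM":        [2, 8, 4, 3],   # bandwidth-tuned: interior peak
}
```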


2012 ◽  
Vol 24 (9) ◽  
pp. 1896-1907 ◽  
Author(s):  
I-Hui Hsieh ◽  
Paul Fillmore ◽  
Feng Rong ◽  
Gregory Hickok ◽  
Kourosh Saberi

Frequency modulation (FM) is an acoustic feature of nearly all complex sounds. Directional FM sweeps are especially pervasive in speech, music, animal vocalizations, and other natural sounds. Although the existence of FM-selective cells in the auditory cortex of animals has been documented, evidence in humans remains equivocal. Here we used multivariate pattern analysis to identify cortical selectivity for direction of a multitone FM sweep. This method distinguishes one pattern of neural activity from another within the same ROI, even when overall level of activity is similar, allowing for direct identification of FM-specialized networks. Standard contrast analysis showed that despite robust activity in auditory cortex, no clusters of activity were associated with up versus down sweeps. Multivariate pattern analysis classification, however, identified two brain regions as selective for FM direction, the right primary auditory cortex on the supratemporal plane and the left anterior region of the superior temporal gyrus. These findings are the first to directly demonstrate the existence of FM direction selectivity in the human auditory cortex.
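The logic of the pattern analysis can be illustrated with a minimal nearest-centroid classifier: two conditions evoke the same overall activity level but different voxel patterns, so a mean-signal contrast fails while pattern classification succeeds. Everything below (voxel counts, noise levels, trial numbers) is a toy construction, not the authors' pipeline.

```python
import numpy as np

def nearest_centroid_classify(train_X, train_y, test_X):
    """Correlation-based nearest-centroid MVPA: label each test pattern with
    the class whose mean training pattern it correlates with best."""
    labels = np.unique(train_y)
    centroids = np.array([train_X[train_y == l].mean(axis=0) for l in labels])
    return np.array([labels[int(np.argmax([np.corrcoef(x, c)[0, 1]
                                           for c in centroids]))]
                     for x in test_X])

rng = np.random.default_rng(2)
n_vox, n_trials = 50, 20
pattern = rng.standard_normal(n_vox)
pattern -= pattern.mean()      # zero-mean: both conditions have equal overall activity

def trials(sign):
    # Only the spatial pattern (not the mean level) distinguishes sweep directions
    return sign * pattern + rng.standard_normal((n_trials, n_vox))

train_X = np.vstack([trials(+1), trials(-1)])
train_y = np.array(["up"] * n_trials + ["down"] * n_trials)
test_X = np.vstack([trials(+1), trials(-1)])
test_y = np.array(["up"] * n_trials + ["down"] * n_trials)

accuracy = (nearest_centroid_classify(train_X, train_y, test_X) == test_y).mean()
```

A univariate contrast on these data averages away the signal, while the classifier recovers the direction reliably, mirroring the contrast-vs-MVPA result described above.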


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Destinee A. Aponte ◽  
Gregory Handy ◽  
Amber M. Kline ◽  
Hiroaki Tsukano ◽  
Brent Doiron ◽  
...  

Detecting the direction of frequency modulation (FM) is essential for vocal communication in both animals and humans. Direction-selective firing of neurons in the primary auditory cortex (A1) has been classically attributed to temporal offsets between feedforward excitatory and inhibitory inputs. However, it remains unclear how cortical recurrent circuitry contributes to this computation. Here, we used two-photon calcium imaging and whole-cell recordings in awake mice to demonstrate that direction selectivity is not caused by temporal offsets between synaptic currents, but by an asymmetry in total synaptic charge between preferred and non-preferred directions. Inactivation of cortical somatostatin-expressing interneurons (SOM cells) reduced direction selectivity, revealing its cortical contribution. Our theoretical models showed that charge asymmetry arises due to broad spatial topography of SOM cell-mediated inhibition which regulates signal amplification in strongly recurrent circuitry. Together, our findings reveal a major contribution of recurrent network dynamics in shaping cortical tuning to behaviorally relevant complex sounds.
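The core quantity here, an asymmetry in total synaptic charge rather than in timing, is straightforward to compute from recorded currents. A minimal sketch, where the integration step, current shapes, and the particular selectivity index are our assumptions:

```python
import numpy as np

def synaptic_charge(current, dt):
    """Total synaptic charge: the synaptic current integrated over time
    (simple Riemann sum; current in pA and dt in s give charge in pA*s)."""
    return float(np.sum(current) * dt)

def direction_selectivity_index(q_pref, q_nonpref):
    """DSI in [0, 1): 0 means equal charge for both FM sweep directions."""
    return (q_pref - q_nonpref) / (q_pref + q_nonpref)

# Toy synaptic currents: identical time course, twice the amplitude for the
# preferred direction (an asymmetry in charge, not in temporal offset)
dt = 1e-4
t = np.arange(0, 0.05, dt)
i_pref = 2.0 * np.exp(-t / 0.01)
i_nonpref = 1.0 * np.exp(-t / 0.01)

dsi = direction_selectivity_index(synaptic_charge(i_pref, dt),
                                  synaptic_charge(i_nonpref, dt))   # 1/3
```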


Author(s):  
Joshua D Downer ◽  
James Bigelow ◽  
Melissa Runfeldt ◽  
Brian James Malone

Fluctuations in the amplitude envelope of complex sounds provide critical cues for hearing, particularly for speech and animal vocalizations. Responses to amplitude modulation (AM) in the ascending auditory pathway have chiefly been described for single neurons. How neural populations might collectively encode and represent information about AM remains poorly characterized, even in primary auditory cortex (A1). We modeled population responses to AM based on data recorded from A1 neurons in awake squirrel monkeys and evaluated how accurately single trial responses to modulation frequencies from 4 to 512 Hz could be decoded as functions of population size, composition, and correlation structure. We found that a population-based decoding model that simulated convergent, equally weighted inputs was highly accurate and remarkably robust to the inclusion of neurons that were individually poor decoders. By contrast, average rate codes based on convergence performed poorly; effective decoding using average rates was only possible when the responses of individual neurons were segregated, as in classical population decoding models using labeled lines. The relative effectiveness of dynamic rate coding in auditory cortex was explained by shared modulation phase preferences among cortical neurons, despite heterogeneity in rate-based modulation frequency tuning. Our results indicate significant population-based synchrony in primary auditory cortex and suggest that robust population coding of the sound envelope information present in animal vocalizations and speech can be reliably achieved even with indiscriminate pooling of cortical responses. These findings highlight the importance of firing rate dynamics in population-based sensory coding.
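The contrast between pooled dynamic rate coding and average-rate coding can be sketched with a toy decoder: sum simulated spike-count histograms across neurons with equal weights (indiscriminate pooling), then pick the candidate modulation frequency with the strongest Fourier component in the pooled response. Neuron counts, rates, and frequencies below are our assumptions.

```python
import numpy as np

def decode_mod_freq(counts, dt, candidates):
    """Decode AM frequency from indiscriminately pooled spike counts:
    sum across neurons with equal weights, remove the mean rate (so overall
    level does not dominate), and return the candidate frequency with the
    largest Fourier amplitude in the pooled histogram."""
    pooled = counts.sum(axis=0).astype(float)
    pooled -= pooled.mean()
    t = np.arange(pooled.size) * dt
    amps = [abs(np.sum(pooled * np.exp(-2j * np.pi * f * t))) for f in candidates]
    return candidates[int(np.argmax(amps))]

# 30 neurons, 1 s of 1-ms bins, Poisson rates modulated at 16 Hz
rng = np.random.default_rng(3)
dt, n_neurons, n_bins = 1e-3, 30, 1000
t = np.arange(n_bins) * dt
rate = 0.02 * (1.0 + np.sin(2 * np.pi * 16 * t))   # expected counts per bin
counts = rng.poisson(rate, size=(n_neurons, n_bins))

decoded = decode_mod_freq(counts, dt, [4, 8, 16, 32])
```

Because the simulated neurons share a modulation phase, pooling preserves the envelope, echoing the finding above that shared phase preferences make summed population responses decodable.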


1990 ◽  
Vol 64 (5) ◽  
pp. 1442-1459 ◽  
Author(s):  
C. E. Schreiner ◽  
J. R. Mendelson

1. Neuronal responses to tones and transient stimuli were mapped with microelectrodes in the primary auditory cortex (AI) of barbiturate-anesthetized cats. Most of the dorsoventral extent of AI was mapped with multiple-unit recordings in the high-frequency domain (between 5.8 and 26.3 kHz) in all six studied cases. The spatial distributions of 1) sharpness of tuning measured with pure tones and 2) response magnitudes to a broadband transient were determined in each of three intensively studied cases.

2. The sharpness of tuning of integrated cluster responses was defined 10 dB above threshold (Q10 dB, integrated excitatory bandwidth). The spatial reconstructions revealed a frequency-independent maximum located near the center of the dorsoventral extent of AI. The sharpness of tuning gradually decreased toward the dorsal and ventral borders of AI in all three cases.

3. The sharpness of tuning 40 dB above response threshold was also analyzed (Q40 dB). The Q40 dB values were less than one-half of the corresponding Q10 dB values. The spatial distribution showed a maximum in the center of AI, similar to the Q10 dB distribution. In two of three cases, restricted additional maxima were recorded dorsal to the main maximum. Overall, Q10 dB and Q40 dB were only moderately correlated, indicating that the integrated excitatory bandwidth at higher stimulus levels can be influenced by additional mechanisms that are not active at lower levels.

4. The magnitude of excitatory responses to a broadband transient (frequency-step response) was determined. The normalized response magnitude varied between less than 1% and 100% relative to a characteristic-frequency (CF) tone response. The step-response magnitude showed a systematic spatial distribution: an area dorsal to the Q10 dB maximum consistently showed the largest response magnitude, surrounded by areas of lower responsivity, and a second, spatially more restricted maximum was recorded in the ventral third of each map. Areas with high transient responsiveness coincided with areas of broad integrated excitatory bandwidth at comparable stimulus levels.

5. The distribution of excitation produced by narrowband and broadband signals suggests a clear functional organization in the isofrequency domain of AI, orthogonal to the main cochleotopic organization. Systematic spatial variations of the integrated excitatory bandwidth reflect underlying cortical processing capacities that may contribute to a parallel analysis of spectral complexity, e.g., spectral shape and contrast, at any given frequency. (ABSTRACT TRUNCATED AT 400 WORDS)
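Q10 dB and Q40 dB can be computed directly from a frequency-threshold tuning curve as CF divided by the bandwidth at the given level above minimum threshold. The sketch below uses a coarse discrete bandwidth estimate on made-up threshold data:

```python
import numpy as np

def q_value(freqs_khz, thresholds_db, db_above_min):
    """Q = CF / bandwidth, where the bandwidth is the span of frequencies
    whose thresholds lie within db_above_min of the minimum threshold
    (a coarse approximation on a discretely sampled tuning curve)."""
    f = np.asarray(freqs_khz, dtype=float)
    thr = np.asarray(thresholds_db, dtype=float)
    cf = f[int(np.argmin(thr))]                       # characteristic frequency
    inside = f[thr <= thr.min() + db_above_min]       # excitatory bandwidth edges
    return cf / (inside.max() - inside.min())

# V-shaped tuning curve: CF = 10 kHz at a 10 dB SPL minimum threshold
freqs = [5, 7, 9, 10, 11, 13, 15]
thresholds = [60, 40, 20, 10, 20, 40, 60]

q10 = q_value(freqs, thresholds, 10)   # 10 / (11 - 9) = 5.0
q40 = q_value(freqs, thresholds, 40)   # 10 / (13 - 7) ≈ 1.67
```

On this toy curve the 40 dB bandwidth is three times the 10 dB bandwidth, so Q40 dB falls below half of Q10 dB, the same ordering the abstract reports for the recorded clusters.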

