Processing of Modulated Sounds in the Zebra Finch Auditory Midbrain: Responses to Noise, Frequency Sweeps, and Sinusoidal Amplitude Modulations

2005, Vol 94(2), pp. 1143-1157. Author(s): Sarah M. N. Woolley, John H. Casseday

The avian auditory midbrain nucleus, the mesencephalicus lateralis, dorsalis (MLd), is the first auditory processing stage in which multiple parallel inputs converge, and it provides the input to the auditory thalamus. We studied the responses of single MLd neurons in adult male zebra finches to four types of modulated sounds: 1) white noise; 2) band-limited noise; 3) frequency-modulated (FM) sweeps; and 4) sinusoidally amplitude-modulated (SAM) tones. Responses were compared with the responses of the same neurons to pure tones in terms of temporal response patterns, thresholds, characteristic frequencies, frequency tuning bandwidths, tuning sharpness, and spike rate/intensity relationships. Most neurons responded well to noise. More than one-half of the neurons responded selectively to particular portions of the noise, suggesting that, unlike forebrain neurons, many MLd neurons can encode specific acoustic components of highly modulated sounds such as noise. Selectivity for FM sweep direction was found in only 13% of cells that responded to sweeps. Those cells also showed asymmetric tuning curves, suggesting that asymmetric inhibition plays a role in FM directional selectivity. Responses to SAM showed that MLd neurons code temporal modulation rates using both spike rate and synchronization. Nearly all cells showed low-pass or band-pass filtering properties for SAM. Best modulation frequencies matched the temporal modulations in zebra finch song. Results suggest that auditory midbrain neurons are well suited for encoding a wide range of complex sounds with a high degree of temporal accuracy rather than selectively responding to only some sounds.
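The SAM stimuli described above follow a standard construction: a pure-tone carrier multiplied by a raised sinusoidal envelope. As an illustrative sketch (not code from the study; the parameter values are arbitrary), such a tone can be generated like this:

```python
import numpy as np

def sam_tone(fc, fm, depth, dur, fs=44100.0):
    """Sinusoidally amplitude-modulated (SAM) tone:
    s(t) = (1 + depth * sin(2*pi*fm*t)) * sin(2*pi*fc*t)

    fc: carrier frequency (Hz); fm: modulation frequency (Hz);
    depth: modulation depth in [0, 1]; dur: duration (s)."""
    t = np.arange(int(dur * fs)) / fs
    envelope = 1.0 + depth * np.sin(2 * np.pi * fm * t)
    return envelope * np.sin(2 * np.pi * fc * t)

# Example: 2 kHz carrier modulated at 50 Hz with 100% depth
s = sam_tone(fc=2000.0, fm=50.0, depth=1.0, dur=0.5)
```

Varying `fm` across trials while measuring spike rate and spike synchronization to the envelope yields the rate-based and timing-based modulation transfer functions the abstract refers to.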

2004, Vol 91(1), pp. 136-151. Author(s): Sarah M. N. Woolley, John H. Casseday

The avian mesencephalicus lateralis, dorsalis (MLd) is the auditory midbrain nucleus in which multiple parallel inputs from lower brain stem converge and through which most auditory information passes to reach the forebrain. Auditory processing in the MLd has not been investigated in songbirds. We studied the tuning properties of single MLd neurons in adult male zebra finches. Pure tones were used to examine tonotopy, temporal response patterns, frequency coding, intensity coding, spike latencies, and duration tuning. Most neurons had no spontaneous activity. The tonotopy of MLd is like that of other birds and mammals; characteristic frequencies (CFs) increase in a dorsal to ventral direction. Four major response patterns were found: 1) onset (49% of cells); 2) primary-like (20%); 3) sustained (19%); and 4) primary-like with notch (12%). CFs ranged between 0.9 and 6.1 kHz, matching the zebra finch hearing range and the power spectrum of song. Tuning curves were generally V-shaped, but complex curves, with multiple peaks or noncontiguous excitatory regions, were observed in 22% of cells. Rate-level functions indicated that 51% of nononset cells showed monotonic relationships between spike rate and sound level. Other cells showed low saturation or nonmonotonic responses. Spike latencies ranged from 4 to 40 ms, measured at CF. Spike latencies generally decreased with increasing sound pressure level (SPL), although paradoxical latency shifts were observed in 16% of units. For onset cells, changes in SPL produced smaller latency changes than for cells showing other response types. Results suggest that auditory midbrain neurons may be particularly suited for processing temporally complex signals with a high degree of precision.


2006, Vol 96(5), pp. 2177-2188. Author(s): Laura M. Hurley

The neuromodulator serotonin has a complex set of effects on the auditory responses of neurons within the inferior colliculus (IC), a midbrain auditory nucleus that integrates a wide range of inputs from auditory and nonauditory sources. To determine whether activation of different types of serotonin receptors is a source of the variability in serotonergic effects, four selective agonists of serotonin receptors in the serotonin (5-HT) 1 and 5-HT2 families were iontophoretically applied to IC neurons, which were monitored for changes in their responses to auditory stimuli. Different agonists had different effects on neural responses. The 5-HT1A agonist had mixed facilitatory and depressive effects, whereas 5-HT1B and 5-HT2C agonists were both largely facilitatory. Different agonists changed threshold and frequency tuning in ways that reflected their effects on spike count. When pairs of agonists were applied sequentially to the same neurons, selective agonists sometimes affected neurons in ways that were similar to serotonin, but not to other selective agonists tested. Different agonists also differentially affected groups of neurons classified by the shapes of their frequency-tuning curves, with serotonin and the 5-HT1 agonists affecting proportionally more non-V-type neurons than the other agonists tested. In all, the evidence suggests that the diversity of serotonin receptor subtypes in the IC is likely to account for at least some of the variability of the effects of serotonin and that receptor subtypes fulfill specialized roles in auditory processing.


1992, Vol 336(1278), pp. 295-306

The past 30 years have seen a remarkable development in our understanding of how the auditory system - particularly the peripheral system - processes complex sounds. Perhaps the most significant advance has been in our understanding of the mechanisms underlying auditory frequency selectivity and their importance for normal and impaired auditory processing. Physiologically vulnerable cochlear filtering can account for many aspects of normal and impaired psychophysical frequency selectivity, with important consequences for the perception of complex sounds. For normal hearing, remarkable mechanisms in the organ of Corti, involving enhancement of mechanical tuning (in mammals probably by feedback of electromechanically generated energy from the hair cells), produce exquisite tuning, reflected in the tuning properties of cochlear nerve fibres. Recent comparisons of physiological (cochlear nerve) and psychophysical frequency selectivity in the same species indicate that the ear’s overall frequency selectivity can be accounted for by this cochlear filtering, at least in bandwidth terms. Because this cochlear filtering is physiologically vulnerable, it deteriorates in deleterious conditions of the cochlea - hypoxia, disease, drugs, noise overexposure, mechanical disturbance - and this deterioration is reflected in impaired psychophysical frequency selectivity. This is a fundamental feature of sensorineural hearing loss of cochlear origin and is of diagnostic value. This cochlear filtering, particularly as reflected in the temporal patterns of cochlear fibres to complex sounds, is remarkably robust over a wide range of stimulus levels. Furthermore, cochlear filtering properties are a prime determinant of the ‘place’ and ‘time’ coding of frequency at the cochlear nerve level, both of which appear to be involved in pitch perception. The problem of how the place and time coding of complex sounds is effected over the ear’s remarkably wide dynamic range is briefly addressed.
In the auditory brainstem, particularly the dorsal cochlear nucleus, inhibitory mechanisms are responsible for enhancing the spectral and temporal contrasts in complex sounds. These mechanisms are now being dissected neuropharmacologically. At the cortical level, mechanisms are evident that are capable of abstracting biologically relevant features of complex sounds. Fundamental studies of how the auditory system encodes and processes complex sounds are vital to promising recent applications in the diagnosis and rehabilitation of the hearing impaired.


2003, Vol 89(1), pp. 472-487. Author(s): Julie A. Grace, Noopur Amin, Nandini C. Singh, Frédéric E. Theunissen

The selectivity of neurons in the zebra finch auditory forebrain for natural sounds was investigated systematically. The principal auditory forebrain area in songbirds consists of the tonotopically organized field L complex, which, by its location in the auditory processing stream, can be compared with the auditory cortex of mammals. We also recorded from a secondary auditory area, cHV. Field L and cHV are auditory processing stages that are presynaptic to the specialized song system nuclei, where auditory neurons show an extremely selective response to the bird's own song but weak responses to almost all other sounds, including conspecific songs. In our study, we found that neurons in field L and cHV had stronger responses to conspecific song than to synthetic sounds that were designed to match the lower order acoustical properties of song, such as their overall power spectra and AM spectra. Such preferential responses to natural sounds cannot be explained by linear frequency tuning or simple nonlinear intensity tuning and require linear or nonlinear spectro-temporal neuronal transfer functions tuned to the acoustical properties of song. The selectivity for conspecific songs in field L and cHV might reflect an intermediate auditory processing stage for vocalizations that then contributes to the generation of the very specific selectivity for the bird's own song seen in the postsynaptic song system.


PeerJ, 2020, Vol 8, e9363. Author(s): Priscilla Logerot, Paul F. Smith, Martin Wild, M. Fabiana Kubke

In birds the auditory system plays a key role in providing the sensory input used to discriminate between conspecific and heterospecific vocal signals. In those species that are known to learn their vocalizations, for example, songbirds, it is generally considered that this ability arises and is manifest in the forebrain, although there is no a priori reason why brainstem components of the auditory system could not also play an important part. To test this assumption, we used groups of normally reared and cross-fostered zebra finches that had previously been shown in behavioural experiments to reduce their preference for conspecific songs subsequent to cross-fostering experience with Bengalese finches, a related species with a distinctly different song. The question we asked, therefore, is whether this experiential change also changes the bias in favour of conspecific song displayed by auditory midbrain units of normally raised zebra finches. By recording the responses of single units in MLd to a variety of zebra finch and Bengalese finch songs in both normally reared and cross-fostered zebra finches, we provide a positive answer to this question. That is, the difference in response to conspecific and heterospecific songs seen in normally reared zebra finches is reduced following cross-fostering. In birds the virtual absence of mammalian-like cortical projections upon auditory brainstem nuclei argues against the interpretation that MLd units change, as observed in the present experiments, as a result of top-down influences on sensory processing. Instead, it appears that MLd units can be influenced significantly by sensory inputs arising directly from a change in auditory experience during development.


2010, Vol 103(6), pp. 3248-3265. Author(s): David M. Schneider, Sarah M. N. Woolley

Many social animals including songbirds use communication vocalizations for individual recognition. The perception of vocalizations depends on the encoding of complex sounds by neurons in the ascending auditory system, each of which is tuned to a particular subset of acoustic features. Here, we examined how well the responses of single auditory neurons could be used to discriminate among bird songs and we compared discriminability to spectrotemporal tuning. We then used biologically realistic models of pooled neural responses to test whether the responses of groups of neurons discriminated among songs better than the responses of single neurons and whether discrimination by groups of neurons was related to spectrotemporal tuning and trial-to-trial response variability. The responses of single auditory midbrain neurons could be used to discriminate among vocalizations with a wide range of abilities, ranging from chance to 100%. The ability to discriminate among songs using single neuron responses was not correlated with spectrotemporal tuning. Pooling the responses of pairs of neurons generally led to better discrimination than the average of the two inputs and the most discriminating input. Pooling the responses of three to five single neurons continued to improve neural discrimination. The increase in discriminability was largest for groups of neurons with similar spectrotemporal tuning. Further, we found that groups of neurons with correlated spike trains achieved the largest gains in discriminability. We simulated neurons with varying levels of temporal precision and measured the discriminability of responses from single simulated neurons and groups of simulated neurons. Simulated neurons with biologically observed levels of temporal precision benefited more from pooling correlated inputs than did neurons with highly precise or imprecise spike trains. These findings suggest that pooling correlated neural responses with the levels of precision observed in the auditory midbrain increases neural discrimination of complex vocalizations.
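As a toy illustration of the pooling idea (synthetic data only; the rate profiles and the nearest-template classifier below are hypothetical stand-ins, not the authors' methods), song discrimination can be framed as assigning each trial's binned spike counts to the closest per-song response template, with pooling implemented as summing the counts of simultaneously recorded neurons:

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminate(train_trials, test_trials):
    """Average training trials per song into a PSTH template, then
    assign each test trial to the nearest template (Euclidean distance).
    Returns the fraction of correctly assigned test trials."""
    templates = {song: np.mean(trials, axis=0)
                 for song, trials in train_trials.items()}
    correct = total = 0
    for song, trials in test_trials.items():
        for trial in trials:
            guess = min(templates, key=lambda s: np.linalg.norm(trial - templates[s]))
            correct += (guess == song)
            total += 1
    return correct / total

# Two hypothetical "songs" drive two model neurons (A, B) with
# different time-varying Poisson rate profiles (spikes per bin).
n_bins, n_trials = 100, 20
profiles = {
    "song1": (np.linspace(0.5, 3.0, n_bins), np.full(n_bins, 1.0)),
    "song2": (np.full(n_bins, 1.0), np.linspace(3.0, 0.5, n_bins)),
}

def trials_for(pooled):
    """Simulate trials; if pooled, sum neuron A and B counts per trial."""
    out = {}
    for song, (rate_a, rate_b) in profiles.items():
        trials = [rng.poisson(rate_a) for _ in range(n_trials)]
        if pooled:
            trials = [t + rng.poisson(rate_b) for t in trials]
        out[song] = trials
    return out

acc_single = discriminate(trials_for(False), trials_for(False))
acc_pooled = discriminate(trials_for(True), trials_for(True))
```

Trial-to-trial Poisson variability plays the role of the response variability discussed above; correlations between pooled inputs could be added by sharing noise across the two neurons' draws.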


2001, Vol 86(3), pp. 1445-1458. Author(s): Kamal Sen, Frédéric E. Theunissen, Allison J. Doupe

Although understanding the processing of natural sounds is an important goal in auditory neuroscience, relatively little is known about the neural coding of these sounds. Recently we demonstrated that the spectral temporal receptive field (STRF), a description of the stimulus-response function of auditory neurons, could be derived from responses to arbitrary ensembles of complex sounds including vocalizations. In this study, we use this method to investigate the auditory processing of natural sounds in the birdsong system. We obtain neural responses from several regions of the songbird auditory forebrain to a large ensemble of bird songs and use these data to calculate the STRFs, which are the best linear model of the spectral-temporal features of sound to which auditory neurons respond. We find that these neurons respond to a wide variety of features in songs ranging from simple tonal components to more complex spectral-temporal structures such as frequency sweeps and multi-peaked frequency stacks. We quantify spectral and temporal characteristics of these features by extracting several parameters from the STRFs. Moreover, we assess the linearity versus nonlinearity of encoding by quantifying the quality of the predictions of the neural responses to songs obtained using the STRFs. Our results reveal successively complex functional stages of song analysis by neurons in the auditory forebrain. When we map the properties of auditory forebrain neurons, as characterized by the STRF parameters, onto conventional anatomical subdivisions of the auditory forebrain, we find that although some properties are shared across different subregions, the distribution of several parameters is suggestive of hierarchical processing.
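The STRF as "the best linear model" has a concrete form: the predicted response is the stimulus spectrogram convolved in time with the receptive-field kernel, r(t) = Σ_f Σ_τ STRF(f, τ) S(f, t − τ). A minimal sketch of that prediction step (hypothetical arrays; the authors' estimation procedure, which also involves correcting for stimulus correlations, is not reproduced here):

```python
import numpy as np

def strf_predict(strf, spectrogram):
    """Linear STRF prediction: r(t) = sum_f sum_tau STRF(f, tau) * S(f, t - tau).

    strf:        (n_freq, n_lag) kernel over frequency and time lag
    spectrogram: (n_freq, n_time) stimulus representation
    returns:     (n_time,) predicted response"""
    n_freq, n_lag = strf.shape
    n_time = spectrogram.shape[1]
    r = np.zeros(n_time)
    for tau in range(n_lag):
        # shift the spectrogram right by tau bins (causal lag) and
        # weight each frequency channel by the STRF column at that lag
        shifted = np.zeros_like(spectrogram)
        shifted[:, tau:] = spectrogram[:, :n_time - tau]
        r += np.sum(strf[:, tau][:, None] * shifted, axis=0)
    return r
```

For example, a kernel with a single nonzero weight at one frequency and zero lag simply copies that frequency channel of the spectrogram into the predicted rate; comparing such predictions against held-out responses gives the linearity assessment described above.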


1992, Vol 68(5), pp. 1760-1774. Author(s): L. Yang, G. D. Pollak, C. Resler

1. The influence of bicuculline on the tuning curves of 65 neurons in the inferior colliculus of the mustache bat was investigated. Single units were recorded with multibarrel electrodes in which one barrel contained bicuculline, an antagonist specific for gamma-aminobutyric acid type A (GABAA) receptors. Fifty-nine tuning curves were recorded from units that were sharply tuned to 60 kHz, the dominant frequency of the bat's orientation call, but six tuning curves were also recorded from units tuned to lower frequencies and whose tuning curves were broader than those of the 60 kHz cells. Tuning curves were constructed from peristimulus time (PST) histograms obtained over a wide range of frequency-intensity combinations. Thus tuning curves, PST histograms evoked by frequencies within the tuning curve, and rate-level functions at the best frequency were obtained before iontophoresis of bicuculline and compared with the tuning curves and response properties obtained during the administration of bicuculline. 2. Three general types of tuning curves were obtained: 1) open tuning curves that broadened on both the high- and low-frequency sides with increasing sound level; 2) level-tolerant tuning curves in which the width of the tuning curve remained uniformly narrow with increasing sound level; and 3) upper-threshold tuning curves, which did not discharge to high-intensity tone bursts at the best frequency, thereby creating closed or folded tuning curves. 3. One major finding is that GABAergic inhibition plays an important role in sharpening the frequency tuning properties of many neurons in the mustache bat inferior colliculus. In response to blocking GABAergic inputs with bicuculline, the tuning curves broadened in 42% of the neurons that were sharply tuned to 60 kHz. The degree of change in most units varied with sound level: tuning curves were least affected, or not affected at all, within 10 dB of threshold and showed progressively greater changes at higher sound levels. These effects were seen in units that had open, level-tolerant, and upper-threshold tuning curves. 4. A second key result is that bicuculline affected rate-level functions and/or temporal discharge patterns in many units. Bicuculline transformed the rate-level functions of 13 cells that originally had nonmonotonic rate-level functions from strongly nonmonotonic into weakly nonmonotonic or monotonic functions. It also changed the temporal discharge patterns in 22 cells, and the changes were often frequency specific.


2003, Vol 89(3), pp. 1603-1622. Author(s): Siddhartha C. Kadia, Xiaoqin Wang

We investigated modulations by stimulus components placed outside of the classical receptive field in the primary auditory cortex (A1) of awake marmosets. Two classes of neurons were identified using single tone stimuli: neurons with single-peaked frequency tuning characteristics (147/185, 80%) and neurons with multipeaked frequency tuning characteristics (38/185, 20%), referred to as single- and multipeaked units, respectively. Each class of neurons was further studied using two-tone paradigms in which the frequency, intensity, and timing of the second tone were systematically varied while a unit was driven by the first tone placed at a unit's characteristic frequency (CF) if it was single-peaked or at one of multiple spectral peaks if it was multipeaked. The main findings were: 1) excitatory spectral peaks in the frequency tuning of the multipeaked units were often harmonically related. 2) Multipeaked units showed facilitation in their responses to combinations of two harmonically related tones placed at the spectral peaks of their frequency tuning. The two-tone facilitation was strongest for simultaneously presented tones. 3) In 76 of 113 single-peaked units studied using the two-tone paradigm, facilitatory and/or inhibitory modulations by distant off-CF tones were observed. This distant inhibition differed from flanking (or side-band) inhibition near CF. 4) In single-peaked units, the distant off-CF inhibitions were dominated by tones at frequencies that were harmonically related to the CF of a unit, whereas the facilitation by off-CF tones was observed for a wide range of frequencies. And 5) in both single- and multipeaked units, the sound levels of two interacting tones determined whether the two tones produced excitation or inhibition. The largest facilitation was achieved by using two tones at their corresponding preferred sound levels. Together, these findings suggest that extracting or rejecting harmonically related components embedded in complex sounds may represent fundamental signal processing properties in different classes of A1 neurons.


1988, Vol 33(12), p. 1103. Author(s): No authorship indicated
