Discrimination of Communication Vocalizations by Single Neurons and Groups of Neurons in the Auditory Midbrain

2010 ◽  
Vol 103 (6) ◽  
pp. 3248-3265 ◽  
Author(s):  
David M. Schneider ◽  
Sarah M. N. Woolley

Many social animals, including songbirds, use communication vocalizations for individual recognition. The perception of vocalizations depends on the encoding of complex sounds by neurons in the ascending auditory system, each of which is tuned to a particular subset of acoustic features. Here, we examined how well the responses of single auditory neurons could be used to discriminate among bird songs, and we compared discriminability to spectrotemporal tuning. We then used biologically realistic models of pooled neural responses to test whether the responses of groups of neurons discriminated among songs better than the responses of single neurons and whether discrimination by groups of neurons was related to spectrotemporal tuning and trial-to-trial response variability. The responses of single auditory midbrain neurons could be used to discriminate among vocalizations with performance ranging from chance to 100%. The ability to discriminate among songs using single-neuron responses was not correlated with spectrotemporal tuning. Pooling the responses of pairs of neurons generally led to better discrimination than either the average of the two inputs or the more discriminating of the two. Pooling the responses of three to five single neurons continued to improve neural discrimination. The increase in discriminability was largest for groups of neurons with similar spectrotemporal tuning. Further, we found that groups of neurons with correlated spike trains achieved the largest gains in discriminability. We simulated neurons with varying levels of temporal precision and measured the discriminability of responses from single simulated neurons and groups of simulated neurons. Simulated neurons with biologically observed levels of temporal precision benefited more from pooling correlated inputs than did neurons with highly precise or imprecise spike trains.
These findings suggest that pooling correlated neural responses with the levels of precision observed in the auditory midbrain increases neural discrimination of complex vocalizations.
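The pooling analysis described above can be sketched in a few lines. The sketch below is illustrative, not the authors' exact method: responses are modeled as noisy binned rate vectors, discrimination uses a simple leave-one-out nearest-template classifier, and "pooling" is a trial-by-trial sum of two neurons' responses.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trials(template, n_trials, noise):
    """Noisy single-trial responses (binned rates) around a rate template."""
    jitter = noise * rng.standard_normal((n_trials, template.size))
    return np.clip(template + jitter, 0.0, None)

def percent_correct(trials_by_song):
    """Leave-one-out nearest-template classification across songs."""
    correct = total = 0
    for s, trials in enumerate(trials_by_song):
        for i, trial in enumerate(trials):
            dists = []
            for s2, other in enumerate(trials_by_song):
                # exclude the held-out trial from its own song's template
                pool = np.delete(other, i, axis=0) if s2 == s else other
                dists.append(np.linalg.norm(trial - pool.mean(axis=0)))
            correct += int(np.argmin(dists) == s)
            total += 1
    return correct / total

# two hypothetical songs, two hypothetical neurons with different tuning
songs = [rng.random(40), rng.random(40)]
neuron_a = [simulate_trials(3 * s, 10, noise=1.0) for s in songs]
neuron_b = [simulate_trials(3 * s[::-1], 10, noise=1.0) for s in songs]

single = percent_correct(neuron_a)
# "pooling": sum the two neurons' simultaneous responses trial by trial
pooled = percent_correct([a + b for a, b in zip(neuron_a, neuron_b)])
```

With realistic noise the pooled classifier typically, though not always, outperforms either input alone, mirroring the paper's qualitative finding.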

2020 ◽  
Vol 124 (4) ◽  
pp. 1165-1182
Author(s):  
Hariprakash Haragopal ◽  
Ryan Dorkoski ◽  
Austin R. Pollard ◽  
Gareth A. Whaley ◽  
Timothy R. Wohl ◽  
...  

Sensorineural hearing loss compromises perceptual abilities that arise from hearing with two ears, yet its effects on binaural aspects of neural responses are largely unknown. We found that, following severe hearing loss from acoustic trauma, auditory midbrain neurons specifically lost the ability to encode differences in the arrival time of a broadband noise stimulus at the two ears, whereas the encoding of sound level differences between the two ears remained uncompromised.
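The interaural time difference (ITD) cue at issue here is conventionally measured as the lag of the cross-correlation peak between the two ear signals. A minimal sketch (not from the paper; the sampling rate and delay are illustrative):

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Lag (in seconds) of the cross-correlation peak between the two ears.
    With this convention, the value is negative when the left-ear signal leads."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    return lag / fs

fs = 44_100
rng = np.random.default_rng(1)
noise = rng.standard_normal(4_410)                         # ~100 ms broadband noise burst
delay = 22                                                 # ~0.5 ms interaural delay in samples
left = noise
right = np.concatenate([np.zeros(delay), noise[:-delay]])  # right ear lags
itd = estimate_itd(left, right, fs)                        # approx. -delay / fs
```

For broadband noise the correlation peak is sharp, which is one reason noise stimuli are standard for probing ITD sensitivity.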


2020 ◽  
Vol 123 (5) ◽  
pp. 1791-1807 ◽  
Author(s):  
Ryan Dorkoski ◽  
Kenneth E. Hancock ◽  
Gareth A. Whaley ◽  
Timothy R. Wohl ◽  
Noelle C. Stroud ◽  
...  

A “division of labor” has previously been assumed in which the directions of low- and high-frequency sound sources are thought to be encoded by neurons preferentially sensitive to low and high frequencies, respectively. Contrary to this, we found that auditory midbrain neurons encode the directions of both low- and high-frequency sounds regardless of their preferred frequencies. Neural responses were shaped by different sound localization cues depending on the stimulus spectrum—even within the same neuron.


2005 ◽  
Vol 94 (2) ◽  
pp. 1143-1157 ◽  
Author(s):  
Sarah M. N. Woolley ◽  
John H. Casseday

The avian auditory midbrain nucleus, the mesencephalicus lateralis, dorsalis (MLd), is the first auditory processing stage in which multiple parallel inputs converge, and it provides the input to the auditory thalamus. We studied the responses of single MLd neurons in adult male zebra finches to four types of modulated sounds: 1) white noise; 2) band-limited noise; 3) frequency-modulated (FM) sweeps; and 4) sinusoidally amplitude-modulated (SAM) tones. Responses were compared with the same neurons' responses to pure tones in terms of temporal response patterns, thresholds, characteristic frequencies, frequency tuning bandwidths, tuning sharpness, and spike rate/intensity relationships. Most neurons responded well to noise. More than one-half of the neurons responded selectively to particular portions of the noise, suggesting that, unlike forebrain neurons, many MLd neurons can encode specific acoustic components of highly modulated sounds such as noise. Selectivity for FM sweep direction was found in only 13% of cells that responded to sweeps. Those cells also showed asymmetric tuning curves, suggesting that asymmetric inhibition plays a role in FM directional selectivity. Responses to SAM showed that MLd neurons code temporal modulation rates using both spike rate and synchronization. Nearly all cells showed low-pass or band-pass filtering properties for SAM. Best modulation frequencies matched the temporal modulations in zebra finch song. Results suggest that auditory midbrain neurons are well suited for encoding a wide range of complex sounds with a high degree of temporal accuracy rather than selectively responding to only some sounds.
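The "synchronization" to SAM mentioned above is conventionally quantified by the Goldberg-Brown vector strength: spike times are mapped to phases of the modulation cycle, and the length of their mean resultant vector is taken. A minimal sketch (the spike trains below are synthetic, for illustration only):

```python
import numpy as np

def vector_strength(spike_times, mod_freq):
    """Goldberg-Brown vector strength: 1.0 = perfect phase locking, ~0 = none."""
    phases = 2.0 * np.pi * mod_freq * np.asarray(spike_times, dtype=float)
    return float(np.abs(np.mean(np.exp(1j * phases))))

# spikes locked to one phase of a 10 Hz modulation -> vector strength ~1
locked = [k / 10.0 for k in range(50)]

# spikes at random times over an integer number of cycles -> vector strength ~0
rng = np.random.default_rng(0)
random_spikes = rng.uniform(0.0, 5.0, size=10_000)
```

Plotting vector strength against modulation frequency yields the temporal modulation transfer function whose low-pass or band-pass shape the abstract describes.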


Biometrics ◽  
1978 ◽  
Vol 34 (3) ◽  
pp. 525
Author(s):  
A. G. Hawkes ◽  
G. Sampath ◽  
S. K. Srinivasan

2010 ◽  
Vol 104 (2) ◽  
pp. 784-798 ◽  
Author(s):  
Noopur Amin ◽  
Patrick Gill ◽  
Frédéric E. Theunissen

We estimated the spectrotemporal receptive fields of neurons in the songbird auditory thalamus, nucleus ovoidalis (Ov), and compared the neural representation of complex sounds in the auditory thalamus to that found in the upstream auditory midbrain nucleus, mesencephalicus lateralis dorsalis (MLd), and the downstream auditory pallial region, field L. Our data refute the idea that the primary sensory thalamus acts as a simple relay nucleus: we find that the auditory thalamic receptive fields obtained in response to song are more complex than the ones found in the midbrain. Moreover, we find that linear tuning diversity and complexity in Ov are closer to those found in field L than in MLd. We also find prevalent tuning to intermediate spectral and temporal modulations, a feature that is unique to Ov. Thus even a feed-forward model of the sensory processing chain, in which neural responses in the sensory thalamus reveal intermediate response properties between those in the sensory periphery and those in the primary sensory cortex, is inadequate to describe the tuning found in Ov. Based on these results, we believe that the auditory thalamic circuitry plays an important role in generating novel complex representations for specific features found in natural sounds.
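A spectrotemporal receptive field (STRF) of the kind estimated here is, in its linear form, a filter mapping the recent spectrogram history to firing rate, commonly fit by ridge-regularized regression. A minimal sketch with synthetic data (dimensions, noise model, and the recovery demonstration are illustrative, not the authors' pipeline):

```python
import numpy as np

def estimate_strf(spectrogram, rate, n_lags, ridge=1e-3):
    """Ridge-regularized linear STRF mapping spectrogram history to firing rate.
    spectrogram: (n_freq, n_time); rate: (n_time,). Returns (n_freq, n_lags)."""
    n_freq, n_time = spectrogram.shape
    # design matrix: one row per time bin, holding the n_lags-bin history
    X = np.stack([spectrogram[:, t - n_lags:t].ravel()
                  for t in range(n_lags, n_time)])
    y = rate[n_lags:]
    w = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y)
    return w.reshape(n_freq, n_lags)

rng = np.random.default_rng(0)
n_freq, n_lags, n_time = 8, 5, 2_000
true_strf = rng.standard_normal((n_freq, n_lags))
spec = rng.standard_normal((n_freq, n_time))
# a purely linear neuron: rate = dot product of STRF with recent history
rate = np.zeros(n_time)
for t in range(n_lags, n_time):
    rate[t] = np.sum(true_strf * spec[:, t - n_lags:t])
est = estimate_strf(spec, rate, n_lags)
```

For white-noise stimuli this reduces to spike-triggered averaging; for natural stimuli such as song, the ridge term compensates for stimulus correlations.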


1992 ◽  
Vol 336 (1278) ◽  
pp. 295-306 ◽  

The past 30 years have seen a remarkable development in our understanding of how the auditory system - particularly the peripheral system - processes complex sounds. Perhaps the most significant advance has been in our understanding of the mechanisms underlying auditory frequency selectivity and their importance for normal and impaired auditory processing. Physiologically vulnerable cochlear filtering can account for many aspects of our normal and impaired psychophysical frequency selectivity, with important consequences for the perception of complex sounds. For normal hearing, remarkable mechanisms in the organ of Corti, involving enhancement of mechanical tuning (in mammals probably by feedback of electro-mechanically generated energy from the hair cells), produce exquisite tuning, reflected in the tuning properties of cochlear nerve fibres. Recent comparisons of physiological (cochlear nerve) and psychophysical frequency selectivity in the same species indicate that the ear’s overall frequency selectivity can be accounted for by this cochlear filtering, at least in bandwidth terms. Because this cochlear filtering is physiologically vulnerable, it deteriorates in deleterious conditions of the cochlea - hypoxia, disease, drugs, noise overexposure, mechanical disturbance - and is reflected in impaired psychophysical frequency selectivity. This is a fundamental feature of sensorineural hearing loss of cochlear origin, and is of diagnostic value. This cochlear filtering, particularly as reflected in the temporal patterns of cochlear fibres to complex sounds, is remarkably robust over a wide range of stimulus levels. Furthermore, cochlear filtering properties are a prime determinant of the ‘place’ and ‘time’ coding of frequency at the cochlear nerve level, both of which appear to be involved in pitch perception. The problem of how the place and time coding of complex sounds is effected over the ear’s remarkably wide dynamic range is briefly addressed.
In the auditory brainstem, particularly the dorsal cochlear nucleus, inhibitory mechanisms enhance the spectral and temporal contrasts in complex sounds. These mechanisms are now being dissected neuropharmacologically. At the cortical level, mechanisms are evident that are capable of abstracting biologically relevant features of complex sounds. Fundamental studies of how the auditory system encodes and processes complex sounds are vital to promising recent applications in the diagnosis and rehabilitation of the hearing impaired.


1990 ◽  
Vol 47 (3) ◽  
pp. 235-256 ◽  
Author(s):  
Willem J. Melssen ◽  
Willem J.M. Epping ◽  
Ivo H.M. van Stokkum

2004 ◽  
Vol 92 (2) ◽  
pp. 959-976 ◽  
Author(s):  
Renaud Jolivet ◽  
Timothy J. Lewis ◽  
Wulfram Gerstner

We demonstrate that single-variable integrate-and-fire models can quantitatively capture the dynamics of a physiologically detailed model for fast-spiking cortical neurons. Through a systematic set of approximations, we reduce the conductance-based model to two variants of integrate-and-fire models. In the first variant (nonlinear integrate-and-fire model), parameters depend on the instantaneous membrane potential, whereas in the second variant, they depend on the time elapsed since the last spike [Spike Response Model (SRM)]. The direct reduction links features of the simple models to biophysical features of the full conductance-based model. To quantitatively test the predictive power of the SRM and of the nonlinear integrate-and-fire model, we compare spike trains in the simple models to those in the full conductance-based model when the models are subjected to identical randomly fluctuating input. For random current input, the simple models reproduce 70–80 percent of the spikes in the full model (with temporal precision of ±2 ms) over a wide range of firing frequencies. For random conductance injection, up to 73 percent of spikes are coincident. We also present a technique for numerically optimizing parameters in the SRM and the nonlinear integrate-and-fire model based on spike trains in the full conductance-based model. This technique can be used to tune simple models to reproduce spike trains of real neurons.
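The reduction itself is beyond a short sketch, but the spike-train comparison can be illustrated with a plain leaky integrate-and-fire neuron (simpler than the paper's nonlinear variants) and a ±2 ms coincidence criterion like the one quoted above. All parameter values here are illustrative assumptions:

```python
import numpy as np

def lif_spike_times(current, dt=0.1, tau=10.0, v_rest=-65.0,
                    v_thresh=-50.0, v_reset=-65.0):
    """Euler-integrated leaky integrate-and-fire neuron; returns spike times in ms."""
    v, spikes = v_rest, []
    for i, inp in enumerate(current):
        v += (dt / tau) * (-(v - v_rest) + inp)
        if v >= v_thresh:
            spikes.append(i * dt)
            v = v_reset
    return spikes

def coincidence_fraction(train_a, train_b, window=2.0):
    """Fraction of spikes in train_a with a partner in train_b within +/- window ms."""
    if not train_a or not len(train_b):
        return 0.0
    b = np.asarray(train_b)
    return float(np.mean([np.any(np.abs(b - t) <= window) for t in train_a]))

rng = np.random.default_rng(0)
drive = 20.0 + 5.0 * rng.standard_normal(5_000)   # 500 ms of noisy input current
reference = lif_spike_times(drive)
```

In the paper's setting, `reference` would come from the full conductance-based model and the candidate train from the reduced model driven by the same input; the coincidence fraction then plays the role of the 70-80% figure.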


2015 ◽  
Vol 93 (6) ◽  
pp. 964-972 ◽  
Author(s):  
Maria Ll. Valero ◽  
Elena Caminos ◽  
Jose M. Juiz ◽  
Juan R. Martinez-Galan

2011 ◽  
Vol 106 (2) ◽  
pp. 500-514 ◽  
Author(s):  
Joseph W. Schumacher ◽  
David M. Schneider ◽  
Sarah M. N. Woolley

The majority of sensory physiology experiments have used anesthesia to facilitate the recording of neural activity. Current techniques allow researchers to study sensory function in the context of varying behavioral states. To reconcile results across multiple behavioral and anesthetic states, it is important to consider how and to what extent anesthesia plays a role in shaping neural response properties. The role of anesthesia has been the subject of much debate, but the extent to which sensory coding properties are altered by anesthesia has yet to be fully defined. In this study we asked how urethane, an anesthetic commonly used for avian and mammalian sensory physiology, affects the coding of complex communication vocalizations (songs) and simple artificial stimuli in the songbird auditory midbrain. We measured spontaneous and song-driven spike rates, spectrotemporal receptive fields, and neural discriminability from responses to songs in single auditory midbrain neurons. In the same neurons, we recorded responses to pure tone stimuli ranging in frequency and intensity. Finally, we assessed the effect of urethane on population-level representations of birdsong. Results showed that intrinsic neural excitability is significantly depressed by urethane but that spectral tuning, single neuron discriminability, and population representations of song do not differ significantly between unanesthetized and anesthetized animals.

