spectrotemporal receptive field
Recently Published Documents

TOTAL DOCUMENTS: 15 (FIVE YEARS: 1)
H-INDEX: 8 (FIVE YEARS: 0)

2021 ◽  
Vol 17 (2) ◽  
pp. e1008768
Author(s):  
Christof Fehrman ◽  
Tyler D. Robbins ◽  
C. Daniel Meliza

Neurons exhibit diverse intrinsic dynamics, which govern how they integrate synaptic inputs to produce spikes. Intrinsic dynamics are often plastic during development and learning, but the effects of these changes on stimulus encoding properties are not well known. To examine this relationship, we simulated auditory responses to zebra finch song using a linear-dynamical cascade model, which combines a linear spectrotemporal receptive field with a dynamical, conductance-based neuron model, then used generalized linear models to estimate encoding properties from the resulting spike trains. We focused on the effects of a low-threshold potassium current (KLT) that is present in a subset of cells in the zebra finch caudal mesopallium and is affected by early auditory experience. We found that KLT affects both spike adaptation and the temporal filtering properties of the receptive field. The direction of the effects depended on the temporal modulation tuning of the linear (input) stage of the cascade model, indicating a strongly nonlinear relationship. These results suggest that small changes in intrinsic dynamics in tandem with differences in synaptic connectivity can have dramatic effects on the tuning of auditory neurons.
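The linear (input) stage of such a cascade can be sketched in a few lines: a spectrogram is filtered by an STRF to produce the time-varying drive that would feed the dynamical stage. This is a minimal illustration with toy shapes and a hand-built single-lobe STRF; it is not the authors' model or parameters:

```python
import numpy as np

def strf_drive(spectrogram, strf):
    """Filter a spectrogram (freq x time) with an STRF (freq x lag) to get
    the time-varying input drive of the cascade's linear stage."""
    n_freq, n_time = spectrogram.shape
    _, n_lag = strf.shape
    drive = np.zeros(n_time)
    for t in range(n_time):
        for lag in range(min(n_lag, t + 1)):   # causal: only past bins
            drive[t] += strf[:, lag] @ spectrogram[:, t - lag]
    return drive

# toy stimulus: 8 frequency channels, 100 time bins
rng = np.random.default_rng(0)
spec = rng.random((8, 100))

# hand-built STRF with a single excitatory lobe (channel 3, 1-bin latency)
strf = np.zeros((8, 5))
strf[3, 1] = 1.0

drive = strf_drive(spec, strf)
```

In the full model this drive would be converted into conductances for the conductance-based neuron; the sketch stops at the filter output.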


2020 ◽  
Author(s):  
Christof Fehrman ◽  
Tyler D Robbins ◽  
C Daniel Meliza

Neurons exhibit diverse intrinsic dynamics, which govern how they integrate synaptic inputs to produce spikes. Intrinsic dynamics are often plastic during development and learning, but the effects of these changes on stimulus encoding properties are not well known. To examine this relationship, we simulated auditory responses to zebra finch song using a linear-dynamical cascade model, which combines a linear spectrotemporal receptive field with a dynamical, conductance-based neuron model, then used generalized linear models to estimate encoding properties from the resulting spike trains. We focused on the effects of a low-threshold potassium current (KLT) that is present in a subset of cells in the zebra finch caudal mesopallium and is affected by early auditory experience. We found that KLT affects both spike adaptation and the temporal filtering properties of the receptive field. Interestingly, the direction of the effects depended on the temporal modulation tuning of the linear (input) stage of the cascade model, indicating a strongly nonlinear relationship. These results suggest that small changes in intrinsic dynamics in tandem with differences in synaptic connectivity can have dramatic effects on the tuning of auditory neurons.


2020 ◽  
Vol 7 (3) ◽  
pp. 191194
Author(s):  
Vani G. Rajendran ◽  
Nicol S. Harper ◽  
Jan W. H. Schnupp

Previous research has shown that musical beat perception is a surprisingly complex phenomenon involving widespread neural coordination across higher-order sensory, motor and cognitive areas. However, the question of how low-level auditory processing must necessarily shape these dynamics, and therefore perception, is not well understood. Here, we present evidence that the auditory cortical representation of music, even in the absence of motor or top-down activations, already favours the beat that will be perceived. Extracellular firing rates in the rat auditory cortex were recorded in response to 20 musical excerpts diverse in tempo and genre, for which musical beat perception had been characterized by the tapping behaviour of 40 human listeners. We found that firing rates in the rat auditory cortex were on average higher on the beat than off the beat. This ‘neural emphasis’ distinguished the beat that was perceived from other possible interpretations of the beat, was predictive of the degree of tapping consensus across human listeners, and was accounted for by a spectrotemporal receptive field model. These findings strongly suggest that the ‘bottom-up’ processing of music performed by the auditory system predisposes the timing and clarity of the perceived musical beat.
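The 'neural emphasis' comparison (mean firing rate near tapped beats versus elsewhere) can be illustrated with a small sketch. The spike times, beat times, and 50 ms window below are invented for illustration; they are not the study's data or analysis parameters:

```python
import numpy as np

def beat_emphasis(spike_times, beat_times, window=0.05):
    """Mean firing rate within +/- window of each beat ('on-beat') versus
    everywhere else ('off-beat'). A crude sketch: total duration is taken
    as the last spike time."""
    spike_times = np.asarray(spike_times, dtype=float)
    on_mask = np.zeros(len(spike_times), dtype=bool)
    for b in beat_times:
        on_mask |= np.abs(spike_times - b) <= window
    duration = spike_times.max()
    on_time = 2 * window * len(beat_times)
    off_time = max(duration - on_time, 1e-9)
    return on_mask.sum() / on_time, (~on_mask).sum() / off_time

# invented spike train clustered around two invented beat times
spikes = [0.50, 0.51, 1.00, 1.49, 1.50, 2.00]
on_rate, off_rate = beat_emphasis(spikes, beat_times=[0.5, 1.5])
```

With spikes clustered near the beats, the on-beat rate exceeds the off-beat rate, which is the signature the study reports.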


2018 ◽  
Author(s):  
Vani G. Rajendran ◽  
Nicol S. Harper ◽  
Jan W. H. Schnupp

Abstract: Musical beat perception is widely regarded as a high-level ability involving widespread coordination across brain areas, but how low-level auditory processing must necessarily shape these dynamics, and therefore perception, remains unexplored. Previous cross-species work suggested that beat perception in simple rhythmic noise bursts is shaped by neural transients in the ascending sensory pathway. Here, we found that low-level processes even substantially explain the emergence of beat in real music. Firing rates in the rat auditory cortex in response to twenty musical excerpts were on average higher on the beat than off the beat tapped by human listeners. This “neural emphasis” distinguished the perceived beat from alternative interpretations, was predictive of the degree of consensus across listeners, and was accounted for by a spectrotemporal receptive field model. These findings indicate that low-level auditory processing may have a stronger influence on the location and clarity of the beat in music than previously thought.


2017 ◽  
Author(s):  
Stephanie Martin ◽  
Christian Mikutta ◽  
Matthew K. Leonard ◽  
Dylan Hungate ◽  
Stefan Koelsch ◽  
...  

Abstract: It remains unclear how the human cortex represents spectrotemporal sound features during auditory imagery, and how this representation compares to auditory perception. To assess this, we recorded electrocorticographic signals from an epileptic patient with proficient music ability in two conditions. First, the participant played two piano pieces on an electronic piano with the sound volume of the digital keyboard on. Second, the participant replayed the same piano pieces, but without auditory feedback, and was asked to imagine hearing the music in his mind. In both conditions, the sound output of the keyboard was recorded, thus allowing precise time-locking between the neural activity and the spectrotemporal content of the music imagery. For both conditions, we built encoding models to predict high-gamma neural activity (70-150 Hz) from the spectrogram representation of the recorded sound. We found robust similarities between perception and imagery in the frequency and temporal tuning properties of auditory areas.

Abbreviations: ECoG, electrocorticography; HG, high gamma; IFG, inferior frontal gyrus; MTG, middle temporal gyrus; Post-CG, post-central gyrus; Pre-CG, pre-central gyrus; SMG, supramarginal gyrus; STG, superior temporal gyrus; STRF, spectrotemporal receptive field.
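An encoding model of this kind (predicting a neural signal from time-lagged spectrogram features) is commonly fit with regularized linear regression. The sketch below uses closed-form ridge regression on synthetic data; the lag count, regularization strength, and all signals are illustrative assumptions, not the study's pipeline:

```python
import numpy as np

def lagged_design(spectrogram, n_lags):
    """Stack time-lagged copies of each spectrogram channel into a
    design matrix of shape (time, freq * n_lags)."""
    n_freq, n_time = spectrogram.shape
    X = np.zeros((n_time, n_freq * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * n_freq:(lag + 1) * n_freq] = spectrogram[:, :n_time - lag].T
    return X

def fit_ridge(X, y, alpha=1.0):
    """Closed-form ridge regression: w = (X'X + alpha I)^{-1} X'y."""
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

# synthetic 'spectrogram' and a synthetic target built from known weights
rng = np.random.default_rng(1)
spec = rng.standard_normal((4, 300))
X = lagged_design(spec, n_lags=3)
true_w = rng.standard_normal(12)
y = X @ true_w + 0.01 * rng.standard_normal(300)

w = fit_ridge(X, y, alpha=0.1)
r = np.corrcoef(X @ w, y)[0, 1]   # prediction accuracy on training data
```

In practice such models are evaluated on held-out data with cross-validated regularization; this sketch only shows the fitting step.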


2016 ◽  
Vol 64 (8) ◽  
pp. 2026-2039 ◽  
Author(s):  
Alireza Sheikhattar ◽  
Jonathan B. Fritz ◽  
Shihab A. Shamma ◽  
Behtash Babadi

2016 ◽  
Vol 28 (2) ◽  
pp. 327-353 ◽  
Author(s):  
Johan Westö ◽  
Patrick J. C. May ◽  
Hannu Tiitinen

Robust representations of sounds with a complex spectrotemporal structure are thought to emerge in hierarchically organized auditory cortex, but the computational advantage of this hierarchy remains unknown. Here, we used computational models to study how such hierarchical structures affect temporal binding in neural networks. We equipped individual units in different types of feedforward networks with local memory mechanisms storing recent inputs and observed how this affected the ability of the networks to process stimuli context dependently. Our findings illustrate that these local memories stack up in hierarchical structures and hence allow network units to exhibit selectivity to spectral sequences longer than the time spans of the local memories. We also illustrate that short-term synaptic plasticity is a potential local memory mechanism within the auditory cortex, and we show that it can bring robustness to context dependence against variation in the temporal rate of stimuli, while introducing nonlinearities to response profiles that are not well captured by standard linear spectrotemporal receptive field models. The results therefore indicate that short-term synaptic plasticity might provide hierarchically structured auditory cortex with computational capabilities important for robust representations of spectrotemporal patterns.
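One standard formalization of short-term synaptic plasticity as a local memory is a Tsodyks-Markram-style depressing synapse: each spike depletes a resource variable that recovers slowly, so recent input history modulates transmission. The parameters below are illustrative, not taken from the study:

```python
def depressing_synapse(spikes, dt=0.001, tau_rec=0.5, U=0.5):
    """Tsodyks-Markram-style short-term depression. Each presynaptic spike
    transmits a fraction U of the available resource x, which then recovers
    toward 1 with time constant tau_rec. The returned per-spike efficacies
    carry a decaying memory of recent input."""
    x = 1.0
    efficacies = []
    for s in spikes:                       # binary train, one entry per dt
        x += dt * (1.0 - x) / tau_rec      # slow recovery
        if s:
            efficacies.append(U * x)       # transmitted amplitude
            x -= U * x                     # resource depletion
    return efficacies

# a 100 Hz burst (spike every 10 ms at dt = 1 ms) progressively depresses
burst = [1 if i % 10 == 0 else 0 for i in range(100)]
eff = depressing_synapse(burst)
```

Because the efficacy at each spike depends on the recent spike history, a unit receiving such synapses inherits a time span of context beyond its instantaneous input, which is the "local memory" role described above.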


2012 ◽  
Vol 107 (12) ◽  
pp. 3296-3307 ◽  
Author(s):  
Nadja Schinkel-Bielefeld ◽  
Stephen V. David ◽  
Shihab A. Shamma ◽  
Daniel A. Butts

Intracellular studies have revealed the importance of cotuned excitatory and inhibitory inputs to neurons in auditory cortex, but typical spectrotemporal receptive field models of neuronal processing cannot account for this overlapping tuning. Here, we apply a new nonlinear modeling framework to extracellular data recorded from primary auditory cortex (A1) that enables us to explore how the interplay of excitation and inhibition contributes to the processing of complex natural sounds. The resulting description produces more accurate predictions of observed spike trains than the linear spectrotemporal model, and the properties of excitation and inhibition inferred by the model are furthermore consistent with previous intracellular observations. It can also describe several nonlinear properties of A1 that are not captured by linear models, including intensity tuning and selectivity to sound onsets and offsets. These results thus offer a broader picture of the computational role of excitation and inhibition in A1 and support the hypothesis that their interactions play an important role in the processing of natural auditory stimuli.
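A minimal sketch of a two-filter model in this spirit (excitation and inhibition each filtered and rectified before being combined) is shown below. The toy kernels, with inhibition delayed relative to excitation, loosely illustrate how such a model can prefer sound onsets; none of this reproduces the authors' fitted filters:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def two_filter_rate(stimulus, k_exc, k_inh, threshold=0.0):
    """Firing rate from separately rectified excitatory and inhibitory
    filter outputs: r = relu(relu(k_e * s) - relu(k_i * s) - threshold)."""
    g_e = relu(np.convolve(stimulus, k_exc, mode="same"))
    g_i = relu(np.convolve(stimulus, k_inh, mode="same"))
    return relu(g_e - g_i - threshold)

rng = np.random.default_rng(2)
stim = rng.standard_normal(200)
k_exc = np.array([0.0, 1.0, 0.5])   # toy kernel: fast excitation
k_inh = np.array([0.0, 0.0, 0.8])   # toy kernel: delayed inhibition
rate = two_filter_rate(stim, k_exc, k_inh)
```

Because each pathway is rectified before subtraction, the model is not equivalent to any single linear filter, which is what lets it express nonlinearities such as onset selectivity that a plain STRF cannot.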


2012 ◽  
Vol 107 (8) ◽  
pp. 2185-2201 ◽  
Author(s):  
Jonathan N. Raksin ◽  
Christopher M. Glaze ◽  
Sarah Smith ◽  
Marc F. Schmidt

Motor-related forebrain areas in higher vertebrates also show responses to passively presented sensory stimuli. However, sensory tuning properties in these areas, especially during wakefulness, and their relation to perception, are poorly understood. In the avian song system, HVC (proper name) is a vocal-motor structure with auditory responses well defined under anesthesia but poorly characterized during wakefulness. We used a large set of stimuli including the bird's own song (BOS) and many conspecific songs (CON) to characterize auditory tuning properties in putative interneurons (HVC_IN) during wakefulness. Our findings suggest that HVC contains a diversity of responses that vary in overall excitability to auditory stimuli, as well as bias in spike rate increases to BOS over CON. We used statistical tests to classify cells in order to further probe auditory responses, yielding one-third of neurons that were either unresponsive or suppressed and two-thirds with excitatory responses to one or more stimuli. A subset of excitatory neurons were tuned exclusively to BOS and showed very low linearity as measured by spectrotemporal receptive field (STRF) analysis. The remaining excitatory neurons responded well to CON stimuli, although many cells still expressed a bias toward BOS. These findings suggest the concurrent presence of a nonlinear and a linear component to responses in HVC, even within the same neuron. These characteristics are consistent with perceptual deficits in distinguishing BOS from CON stimuli following lesions of HVC and other song nuclei and suggest mirror neuron-like qualities in which “self” (here BOS) is used as a referent to judge “other” (here CON).


2010 ◽  
Vol 103 (3) ◽  
pp. 1195-1208 ◽  
Author(s):  
C. Daniel Meliza ◽  
Zhiyi Chi ◽  
Daniel Margoliash

The functional organization giving rise to stimulus selectivity in higher-order auditory neurons remains under active study. We explored the selectivity for motifs, spectrotemporally distinct perceptual units in starling song, recording the responses of 96 caudomedial mesopallium (CMM) neurons in European starlings (Sturnus vulgaris) under awake-restrained and urethane-anesthetized conditions. A subset of neurons was highly selective between motifs. Selectivity was correlated with low spontaneous firing rates and high spike timing precision, and all but one of the selective neurons had similar spike waveforms. Neurons were further tested with stimuli in which the notes comprising the motifs were manipulated. Responses to most of the isolated notes were similar in amplitude, duration, and temporal pattern to the responses elicited by those notes in the context of the motif. For these neurons, we could accurately predict the responses to motifs from the sum of the responses to notes. Some notes were suppressed by the motif context, such that removing other notes from motifs unmasked additional excitation. Models of linear summation of note responses consistently outperformed spectrotemporal receptive field models in predicting responses to song stimuli. Tests with randomized sequences of notes confirmed the predictive power of these models. Whole notes gave better predictions than did note fragments. Thus in CMM, auditory objects (motifs) can be represented by a linear combination of excitation and suppression elicited by the note components of the object. We hypothesize that the receptive fields arise from selective convergence by inputs responding to specific spectrotemporal features of starling notes.
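The linear-summation model described above (predicting the motif response as the sum of the note responses, each aligned to its note's onset) can be sketched directly. The note responses and onsets below are toy values, not recorded data:

```python
import numpy as np

def predict_motif_response(note_responses, note_onsets, motif_len):
    """Predict the motif response as the sum of the note responses,
    each shifted to its note's onset (all in time bins)."""
    pred = np.zeros(motif_len)
    for resp, onset in zip(note_responses, note_onsets):
        end = min(onset + len(resp), motif_len)
        pred[onset:end] += resp[:end - onset]
    return pred

# two toy note responses placed at onsets 0 and 5 within a 12-bin motif
notes = [np.array([1.0, 2.0, 1.0]), np.array([0.5, 0.5])]
pred = predict_motif_response(notes, [0, 5], motif_len=12)
```

Context-dependent suppression of the kind the study reports would appear as a systematic overprediction by this sum for the affected notes, which is how the linear model exposes the nonlinear component.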

