Neural responses in songbird forebrain reflect learning rates, acquired salience, and stimulus novelty after auditory discrimination training

2015 ◽ Vol 113 (5) ◽ pp. 1480-1492
Author(s): Brittany A. Bell, Mimi L. Phan, David S. Vicario

How do social interactions form and modulate the neural representations of specific complex signals? This question can be addressed in the songbird auditory system. Like humans, songbirds learn to vocalize by imitating tutors heard during development. These learned vocalizations are important in reproductive and social interactions and in individual recognition. As a model for the social reinforcement of particular songs, male zebra finches were trained to peck for a food reward in response to one song stimulus (GO) and to withhold responding for another (NoGO). After performance reached criterion, single and multiunit neural responses to both trained and novel stimuli were obtained from multiple electrodes inserted bilaterally into two songbird auditory processing areas [caudomedial mesopallium (CMM) and caudomedial nidopallium (NCM)] of awake, restrained birds. Neurons in these areas undergo stimulus-specific adaptation to repeated song stimuli, and responses to familiar stimuli adapt more slowly than to novel stimuli. The results show that auditory responses differed in NCM and CMM for trained (GO and NoGO) stimuli vs. novel song stimuli. When subjects were grouped by the number of training days required to reach criterion, fast learners showed larger neural responses and faster stimulus-specific adaptation to all stimuli than slow learners in both areas. Furthermore, responses in NCM of fast learners were more strongly left-lateralized than in slow learners. Thus auditory responses in these sensory areas not only encode stimulus familiarity, but also reflect behavioral reinforcement in our paradigm, and can potentially be modulated by social interactions.
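The adaptation measure described above (responses that decrease across repeated presentations, more slowly for familiar songs) can be quantified as the slope of trial-normalized response magnitudes across repetitions. The sketch below is a minimal illustration of that idea, not the authors' analysis code; the function name and the criterion of normalizing to the first trial are assumptions.

```python
from statistics import mean

def adaptation_rate(responses):
    """Least-squares slope of trial-normalized responses across repetitions.

    `responses` lists a unit's mean firing-rate responses to successive
    presentations of one stimulus; values are normalized to the first
    trial so rates are comparable across units. A more negative slope
    means faster stimulus-specific adaptation.
    """
    norm = [r / responses[0] for r in responses]
    xs = list(range(1, len(norm) + 1))
    x_bar, y_bar = mean(xs), mean(norm)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, norm))
    den = sum((x - x_bar) ** 2 for x in xs)
    return num / den

# Illustrative values: a familiar song adapts more slowly (shallower
# negative slope) than a novel one.
novel = [10.0, 8.0, 6.5, 5.5, 4.8]
familiar = [10.0, 9.6, 9.3, 9.1, 8.9]
```

Grouping units by such slopes is one way the fast-learner/slow-learner contrast in adaptation speed could be operationalized.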

2017 ◽ Vol 117 (3) ◽ pp. 1266-1280
Author(s): Efe Soyman, David S. Vicario

Sensory and motor brain structures work in collaboration during perception. To evaluate their respective contributions, the present study recorded neural responses to auditory stimulation at multiple sites simultaneously in both the higher-order auditory area NCM and the premotor area HVC of the songbird brain in awake zebra finches (Taeniopygia guttata). Bird's own song (BOS) and various conspecific songs (CON) were presented in both blocked and shuffled sequences. Neural responses showed plasticity in the form of stimulus-specific adaptation, with markedly different dynamics between the two structures. In NCM, the response decrease with repetition of each stimulus was gradual and long-lasting and did not differ between the stimuli or the stimulus presentation sequences. In contrast, HVC responses to CON stimuli decreased much more rapidly in the blocked than in the shuffled sequence. Furthermore, this decrease was more transient in HVC than in NCM, as shown by differential dynamics in the shuffled sequence. Responses to BOS in HVC decreased more gradually than to CON stimuli. The quality of neural representations, computed as the mutual information between stimuli and neural activity, was higher in NCM than in HVC. Conversely, internal functional correlations, estimated as the coherence between recording sites, were greater in HVC than in NCM. The cross-coherence between the two structures was weak and limited to low frequencies. These findings suggest that auditory communication signals are processed according to very different but complementary principles in NCM and HVC, a contrast that may inform study of the auditory and motor pathways for human speech processing.

NEW & NOTEWORTHY Neural responses to auditory stimulation in sensory area NCM and premotor area HVC of the songbird forebrain show plasticity in the form of stimulus-specific adaptation with markedly different dynamics. These two structures also differ in stimulus representations and internal functional correlations. Accordingly, NCM seems to process the individually specific complex vocalizations of others based on prior familiarity, while HVC responses appear to be modulated by transitions and/or timing in the ongoing sequence of sounds.
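The "quality of neural representations" metric above, mutual information I(S;R) between stimulus identity and neural activity, can be estimated from paired counts of stimulus labels and discretized response labels. The sketch below is a minimal plug-in estimator for illustration only; the authors' binning and bias-correction choices are not specified here, and the function name is an assumption.

```python
from collections import Counter
from math import log2

def mutual_information(stims, resps):
    """Plug-in estimate of I(S;R) in bits from paired lists of stimulus
    labels and discretized response labels (e.g. binned spike counts)."""
    n = len(stims)
    c_s = Counter(stims)            # marginal stimulus counts
    c_r = Counter(resps)            # marginal response counts
    c_sr = Counter(zip(stims, resps))  # joint counts
    mi = 0.0
    for (s, r), c in c_sr.items():
        # p(s,r) * log2( p(s,r) / (p(s) p(r)) ), written with raw counts
        mi += (c / n) * log2(c * n / (c_s[s] * c_r[r]))
    return mi

# Two equiprobable stimuli with perfectly discriminable responses
# carry exactly 1 bit; uninformative responses carry 0 bits.
stims = ["BOS", "BOS", "CON", "CON"]
resps = ["high", "high", "low", "low"]
```

With this kind of estimate, "higher in NCM than in HVC" means NCM responses let an ideal observer identify the stimulus more reliably.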


2012 ◽ Vol 107 (6) ◽ pp. 1621-1631
Author(s): L. Remage-Healey, S. M. Dong, A. Chao, B. A. Schlinger

Recent evidence shows that brain-derived steroids such as estrogens (“neuroestrogens”) are controlled in a manner very similar to traditional neurotransmitters. The advent of in vivo microdialysis for steroids in songbirds has provided new information about the spatial and temporal dynamics of neuroestrogen changes in a region of the auditory cortex, the caudomedial nidopallium (NCM). Here, experiments using in vivo microdialysis demonstrate that neuroestradiol (E2) fluctuations occur within the auditory NCM during presentation of naturalistic auditory and visual stimuli in males but only to the presentation of auditory stimuli in females. These changes are acute (within 30 min) and appear to be specific to the NCM, because similar treatments elicit no changes in E2 in a nearby mesopallial region or in circulating plasma. Further experiments coupling in vivo steroid microdialysis with extracellular recordings in NCM show that neuroestrogens rapidly boost auditory responses to song stimuli in females, similar to recent observations in males. We also find that the rapid actions of estradiol on auditory responses are fully mimicked by the cell membrane-impermeable estrogen biotinylestradiol, consistent with acute estrogen actions at the neuronal membrane. We conclude, first, that local and acute E2 flux is regulated by convergent multimodal sensory input, and that this regulation appears to be sex-specific. Second, rapid changes in local E2 levels in NCM have consequences for the modulation of auditory processing in females and males. Finally, the rapid actions of neuroestrogens on NCM auditory processing appear to be mediated by a nonclassical, membrane-bound estrogen receptor.


2021 ◽ Vol 13 (1)
Author(s): Jannath Begum-Ali, Anna Kolesnik-Taylor, Isabel Quiroz, Luke Mason, ...

Abstract
Background: Sensory modulation difficulties are common in children with conditions such as Autism Spectrum Disorder (ASD) and could contribute to other social and non-social symptoms. Positing a causal role for sensory processing differences requires observing atypical sensory reactivity prior to the emergence of other symptoms, which can be achieved through prospective studies.
Methods: In this longitudinal study, we examined auditory repetition suppression and change detection at 5 and 10 months in infants with and without Neurofibromatosis Type 1 (NF1), a condition associated with higher likelihood of developing ASD.
Results: In typically developing infants, suppression to vowel repetition and enhanced responses to vowel/pitch change decreased with age over posterior regions, becoming more frontally specific; age-related change was diminished in the NF1 group. Whilst both groups detected changes in vowel and pitch, the NF1 group were largely slower to show a differentiated neural response. Auditory responses did not relate to later language, but were related to later ASD traits.
Conclusions: These findings represent the first demonstration of atypical brain responses to sounds in infants with NF1 and suggest they may relate to the likelihood of later ASD.


2021
Author(s): Shannon L.M. Heald, Stephen C. Van Hedger, John Veillette, Katherine Reis, Joel S. Snyder, ...

Abstract
The ability to generalize rapidly across specific experiences is vital for robust recognition of new patterns, especially in speech perception considering acoustic-phonetic pattern variability. Behavioral research has demonstrated that listeners can rapidly generalize their experience with a talker’s speech, improving understanding of a difficult-to-understand talker without prolonged practice, e.g., even after a single training session. Here, we examine the differences in neural responses to generalized versus rote learning in auditory cortical processing by training listeners to understand a novel synthetic talker using a Pretest-Posttest design with electroencephalography (EEG). Participants were trained using either (1) a large inventory of words where no words repeated across the experiment (generalized learning) or (2) a small inventory of words where words repeated (rote learning). Analysis of long-latency auditory evoked potentials at Pretest and Posttest revealed that while rote and generalized learning both produce rapid changes in auditory processing, the nature of these changes differed. In the context of adapting to a talker, generalized learning is marked by an amplitude reduction in the N1-P2 complex and by the presence of a late-negative (LN) wave in the auditory evoked potential following training. Rote learning, however, is marked only by temporally later source configuration changes. The early N1-P2 change, found only for generalized learning, suggests that generalized learning relies on the attentional system to reorganize the way acoustic features are selectively processed. This change in relatively early sensory processing (i.e., during the first 250 ms) is consistent with an active processing account of speech perception, which proposes that the ability to rapidly adjust to the specific vocal characteristics of a new talker (for which rote learning is rare) relies on attentional mechanisms to adaptively tune early auditory processing sensitivity.

Statement of Significance
Previous research on perceptual learning has typically examined neural responses during rote learning: training and testing are carried out with the same stimuli. As a result, it is not clear that findings from these studies can explain learning that generalizes to novel patterns, which is critical in speech perception. Are neural responses to generalized learning in auditory processing different from neural responses to rote learning? Results indicate rote learning of a particular talker’s speech involves brain regions focused on the memory encoding and retrieving of specific learned patterns, whereas generalized learning involves brain regions involved in reorganizing attention during early sensory processing. In learning speech from a novel talker, only generalized learning is marked by changes in the N1-P2 complex (reflective of secondary auditory cortical processing). The results are consistent with the view that robust speech perception relies on the fast adjustment of attention mechanisms to adaptively tune auditory sensitivity to cope with acoustic variability.
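The N1-P2 amplitude reduction reported above is conventionally measured peak-to-peak on the trial-averaged evoked potential: the N1 trough in an early window minus the P2 peak in a later window. The sketch below illustrates that measurement; the window bounds are typical textbook values, not the study's exact settings, and the function name is an assumption.

```python
def n1_p2_amplitude(erp, times, n1_win=(0.080, 0.150), p2_win=(0.150, 0.250)):
    """Peak-to-peak N1-P2 amplitude from an averaged evoked potential.

    `erp` is the trial-averaged voltage series (microvolts) and `times`
    the matching timestamps in seconds. N1 is taken as the minimum in
    its window and P2 as the maximum in its window.
    """
    n1 = min(v for v, t in zip(erp, times) if n1_win[0] <= t <= n1_win[1])
    p2 = max(v for v, t in zip(erp, times) if p2_win[0] <= t <= p2_win[1])
    return p2 - n1

# Toy ERP sampled every 10 ms: an N1 trough at 100 ms, a P2 peak at 200 ms.
times = [i / 1000 for i in range(0, 301, 10)]
erp = [0.0] * len(times)
erp[10] = -4.0  # N1 trough
erp[20] = 3.0   # P2 peak
```

A post-training reduction in this value at Posttest relative to Pretest is the signature the generalized-learning condition showed.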


2020 ◽ Vol 14
Author(s): Guangfei Li, Yu Chen, Wuyi Wang, Isha Dhingra, Simon Zhornitsky, ...

2004 ◽ Vol 91 (1) ◽ pp. 136-151
Author(s): Sarah M. N. Woolley, John H. Casseday

The avian mesencephalicus lateralis, dorsalis (MLd) is the auditory midbrain nucleus in which multiple parallel inputs from lower brain stem converge and through which most auditory information passes to reach the forebrain. Auditory processing in the MLd has not been investigated in songbirds. We studied the tuning properties of single MLd neurons in adult male zebra finches. Pure tones were used to examine tonotopy, temporal response patterns, frequency coding, intensity coding, spike latencies, and duration tuning. Most neurons had no spontaneous activity. The tonotopy of MLd is like that of other birds and mammals; characteristic frequencies (CFs) increase in a dorsal to ventral direction. Four major response patterns were found: 1) onset (49% of cells); 2) primary-like (20%); 3) sustained (19%); and 4) primary-like with notch (12%). CFs ranged between 0.9 and 6.1 kHz, matching the zebra finch hearing range and the power spectrum of song. Tuning curves were generally V-shaped, but complex curves, with multiple peaks or noncontiguous excitatory regions, were observed in 22% of cells. Rate-level functions indicated that 51% of nononset cells showed monotonic relationships between spike rate and sound level. Other cells showed low saturation or nonmonotonic responses. Spike latencies ranged from 4 to 40 ms, measured at CF. Spike latencies generally decreased with increasing sound pressure level (SPL), although paradoxical latency shifts were observed in 16% of units. For onset cells, changes in SPL produced smaller latency changes than for cells showing other response types. Results suggest that auditory midbrain neurons may be particularly suited for processing temporally complex signals with a high degree of precision.
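The monotonic versus nonmonotonic distinction in rate-level functions above is usually made with a simple threshold rule: a unit is called nonmonotonic if its firing rate at the highest sound level drops well below its peak rate. The sketch below illustrates one common criterion (rate at the highest level must stay within 75% of the peak); the exact cutoff used in the study is not stated here, and the function name is an assumption.

```python
def classify_rate_level(rates, criterion=0.75):
    """Classify a rate-level function as 'monotonic' or 'nonmonotonic'.

    `rates` lists spike rates measured at increasing sound pressure
    levels. A unit is labeled nonmonotonic when its rate at the highest
    level falls below `criterion` times its peak rate (an assumed,
    commonly used cutoff, not necessarily the paper's).
    """
    peak = max(rates)
    return "monotonic" if rates[-1] >= criterion * peak else "nonmonotonic"

# Illustrative rate-level functions (spikes/s at increasing SPL):
increasing = [2, 5, 9, 12, 13]   # keeps rising with level
peaked = [2, 8, 14, 9, 5]        # rises, then falls well below peak
```

Under such a rule, the 51% of nononset cells described above would be those whose rates keep increasing (or saturate) with level rather than falling back down.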


2021
Author(s): Floria M.K. Uy, Christopher M. Jernigan, Natalie C. Zaba, Eshan Mehrotra, Sara E. Miller, ...

ABSTRACT
Social interactions have large effects on individual physiology and fitness. In the immediate sense, social stimuli are often highly salient and engaging. Over longer time scales, competitive interactions often lead to distinct social ranks and differences in physiology and behavior. Understanding how initial responses lead to longer-term effects of social interactions requires examining the changes in responses over time. Here we examined the effects of social interactions on transcriptomic signatures at two time points, at the end of a 45-minute interaction and 4 hours later, in female Polistes fuscatus paper wasp foundresses. Female P. fuscatus have variable facial patterns that are used for visual individual recognition, so we separately examined the transcriptional dynamics in the optic lobe and the central brain. Results demonstrate much stronger transcriptional responses to social interactions in the central brain compared to the optic lobe. Differentially regulated genes in response to social interactions are enriched for memory-related transcripts. Comparisons between winners and losers of the encounters revealed similar overall transcriptional profiles at the end of an interaction, which significantly diverged over the course of 4 hours, with losers showing changes in expression levels of genes associated with aggression and reproduction in paper wasps. On nests, subordinate foundresses are less aggressive, forage more, and lay fewer eggs than dominant foundresses; accordingly, we find that losers shift expression of many genes, including vitellogenin, related to aggression, worker behavior, and reproduction within hours of losing an encounter. These results highlight the early neurogenomic changes that likely contribute to behavioral and physiological effects of social status changes in a social insect.


2019
Author(s): Kelly K Chong, Alex G Dunlap, Dorottya B Kacsoh, Robert C Liu

SUMMARY
Frequency modulations are an inherent feature of many behaviorally relevant sounds, including vocalizations and music. Changing trajectories in a sound’s frequency often convey meaningful information, which can be used to differentiate sound categories, as in the case of intonations in tonal languages. However, it is not clear which features of the neural responses, in which parts of the auditory cortical pathway, are most important for conveying information about behaviorally relevant frequency modulations, or how these responses change with experience. Here we uncover tuning to subtle variations in frequency trajectories in mouse auditory cortex. Surprisingly, we found that auditory cortical responses could be modulated by variations in a pure tone trajectory as small as 1/24th of an octave. Offset spiking accounted for a significant portion of tuned responses to subtle frequency modulation. Offset responses that were present in the adult A2, but not those in Core auditory cortex, were plastic in a way that enhanced the representation of an acquired behaviorally relevant sound category, which we illustrate with the maternal mouse paradigm for natural communication sound learning. By using this ethologically inspired sound-feature tuning paradigm to drive auditory responses in higher-order neurons, our results demonstrate that auditory cortex can track much finer frequency modulations than previously appreciated, which allows A2 offset responses in particular to attune to the pitch trajectories that distinguish behaviorally relevant, natural sound categories.
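The 1/24th-octave resolution described above is easy to make concrete: a step of 1/24 octave multiplies frequency by 2^(1/24), roughly 2.9%. The sketch below generates a pure-tone trajectory in such steps for illustration; the function and parameter names are assumptions, not the authors' stimulus code.

```python
def fm_trajectory(f0_hz, steps_per_octave=24, n_points=5):
    """Frequencies (Hz) along a tone trajectory that moves upward in
    1/`steps_per_octave`-octave increments from a starting frequency.

    With steps_per_octave=24, each step multiplies frequency by
    2**(1/24) (about 2.9%), the resolution the study reports mouse
    auditory cortex can distinguish.
    """
    return [f0_hz * 2 ** (k / steps_per_octave) for k in range(n_points)]

# 25 points in 1/24-octave steps span exactly one octave (1 kHz -> 2 kHz).
traj = fm_trajectory(1000.0, steps_per_octave=24, n_points=25)
```

Pairs of trajectories differing by a single such step are the kind of subtle contrast the tuned offset responses were able to resolve.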


2021
Author(s): Sudha Sharma, Hemant Kumar Srivastava, Sharba Bandyopadhyay

Abstract
So far, our understanding of the role of the auditory cortex (ACX) in processing visual information has been limited to infragranular layers of the ACX, which have been shown to respond to visual stimulation. Here, we investigate the neurons in supragranular layers of the mouse ACX using 2-photon calcium imaging. Contrary to previous reports, here we show that more than 20% of responding neurons in layer 2/3 of the ACX respond to full-field visual stimulation. These responses occur by both excitation and hyperpolarization. The primary ACX (A1) has a greater proportion of visual responses by hyperpolarization compared to excitation, likely driven by inhibitory neurons of the infragranular layers of the ACX rather than local layer 2/3 inhibitory neurons. Further, we found that more than 60% of neurons in layer 2/3 of A1 are multisensory in nature. We also show the presence of multisensory neurons in close proximity to exclusively auditory neurons, and that there is a reduction in the noise correlations of the recorded neurons during multisensory presentation. This is evidence in favour of deep and intricate visual influence over auditory processing. The results have strong implications for decoding visual influences over the early auditory cortical regions.

Significance statement
To understand what features of our visual world are processed in the auditory cortex (ACX), it is important to characterize the response properties of auditory cortical neurons to visual stimuli. Here, we show the presence of visual and multisensory responses in the supragranular layers of the ACX. Hyperpolarization to visual stimulation is more commonly observed in the primary ACX. Multisensory stimulation results in suppression of responses compared to unisensory stimulation and an overall decrease in noise correlation in the primary ACX. The close-knit architecture of these neurons with auditory-specific neurons suggests the influence of non-auditory stimuli on auditory processing.
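The noise correlation reported above is, in essence, the Pearson correlation of two neurons' trial-by-trial responses to repeated presentations of the same stimulus, so that shared trial-to-trial fluctuations (rather than shared stimulus drive) are what is measured. The sketch below is a minimal single-pair illustration; the function name is an assumption, and population analyses would average this over many pairs.

```python
from math import sqrt
from statistics import mean

def noise_correlation(a, b):
    """Pearson correlation between two neurons' responses across repeated
    trials of one stimulus. Subtracting each neuron's across-trial mean
    removes the common stimulus-driven component, leaving the co-varying
    trial-to-trial 'noise'."""
    a_bar, b_bar = mean(a), mean(b)
    da = [x - a_bar for x in a]
    db = [y - b_bar for y in b]
    num = sum(x * y for x, y in zip(da, db))
    den = sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den
```

A decrease in this quantity during multisensory presentation, as reported above, indicates that neurons fluctuate more independently when visual input accompanies sound.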

