Blocking neuroestrogen synthesis modifies neural representations of learned song without altering vocal imitation accuracy in developing songbirds

2019 ◽  
Author(s):  
Daniel M. Vahaba ◽  
Amelia Hecsh ◽  
Luke Remage-Healey

ABSTRACT
Birdsong, like human speech, is learned early in life by first memorizing an auditory model. Once memorized, birds compare their own burgeoning vocalizations to their auditory memory and adjust their song to match the model. While much is known about this latter part of vocal learning, less is known about how initial auditory experiences are formed and consolidated. In both adult and developing songbirds, there is strong evidence that the caudomedial nidopallium (NCM), a higher-order auditory forebrain area, is the site of auditory memory consolidation. However, the mechanisms that facilitate this consolidation are poorly understood. One likely mechanism is 17β-estradiol (E2), which is associated with speech-language development and disorders in humans and is abundant in both mammalian temporal cortex and songbird NCM. Circulating E2 is also elevated during the auditory memory phase, and E2 rises in NCM immediately after song learning sessions, suggesting it functions to encode recent auditory experience. We therefore tested a role for E2 production in auditory memory consolidation during development, with a comprehensive set of investigations spanning neuroanatomy, neurophysiology, and behavior. Our results demonstrate that while systemic estrogen synthesis blockade regulates juvenile song production, inhibiting E2 synthesis locally within NCM does not adversely affect song learning outcomes. Surprisingly, early-life E2 manipulations in NCM modify the neural representations of birds' own song and the model tutor song in both NCM and a downstream sensorimotor nucleus (HVC). Further, we show that the capacity to synthesize neuroestrogens remains high throughout development, alongside substantial changes in NCM cell density across age.
Taken together, these findings suggest that E2 plays a multifaceted role during development and demonstrate that, contrary to prediction, unilateral post-training estrogen synthesis blockade in the auditory cortex does not negatively impact vocal learning. Acute downregulation of neuroestrogens is therefore likely permissive for juvenile auditory memorization, while neuroestrogen synthesis influences communication production and representation in adulthood.

2020 ◽  
Author(s):  
Ha Na Choe ◽  
Jeevan Tewari ◽  
Kevin W. Zhu ◽  
Matthew Davenport ◽  
Hiroaki Matsunami ◽  
...  

Abstract
Sex hormones alter the organization of the brain during early development and coordinate various behaviors throughout life. In zebra finches, song learning is limited to males, and the associated song-learning brain pathway matures only in males and atrophies in females. This atrophy can be reversed by giving females exogenous estrogen during early post-hatch development, but whether normal male song system development requires estrogen is uncertain. For the first time in songbirds, we administered exemestane, a potent third-generation estrogen synthesis inhibitor, from the day of hatching until adulthood. We examined the behavior, brain, and transcriptome of individual song nuclei of these pharmacologically manipulated animals. We found that males with long-term exemestane treatment had diminished male-specific plumage and impaired song learning, but retained normal song nuclei sizes and most, but not all, of their specialized transcriptome. Consistent with prior findings, females with long-term estrogen treatment retained a functional song system, and we further observed that their song nuclei had specialized gene expression profiles similar, but not identical, to those of males. We also observed that different song nuclei responded to estrogen manipulation differently, with Area X in the striatum being the most altered. These findings support the hypothesis that song learning is an ancestral trait in both sexes that was subsequently suppressed in females of some species, and that estrogen has come to play a critical role in modulating this suppression as well as the refinement of song learning.


2012 ◽  
Vol 107 (4) ◽  
pp. 1142-1156 ◽  
Author(s):  
Vanessa C. Miller-Sims ◽  
Sarah W. Bottjer

Experience-dependent changes in neural connectivity underlie developmental learning and result in life-long changes in behavior. In songbirds, axons from the cortical region LMANcore (the core region of the lateral magnocellular nucleus of the anterior nidopallium) convey the output of a basal ganglia circuit necessary for song learning to vocal motor cortex (the robust nucleus of the arcopallium, RA). This axonal projection undergoes remodeling during the sensitive period for learning to achieve topographic organization. To examine how auditory experience instructs the development of connectivity in this pathway, we compared the morphology of individual LMANcore→RA axon arbors in normal juvenile songbirds to those of birds raised in white noise. The spatial extent of axon arbors decreased during the first week of vocal learning, even in the absence of normal auditory experience. During the second week of vocal learning, axon arbors of normal birds showed a loss of branches and varicosities; in contrast, experience-deprived birds showed no reduction in branches or varicosities and maintained some arbors in the wrong topographic location. Thus both experience-independent and experience-dependent processes are necessary to establish topographic organization in juvenile birds, which may allow birds to modify their vocal output in a directed manner and match their vocalizations to a tutor song. Many LMANcore axons of juvenile birds, but not adults, extended branches into the dorsal arcopallium (Ad), a region adjacent to RA that is part of a parallel basal ganglia pathway also necessary for vocal learning. This transient projection provides a point of integration between the two basal ganglia pathways, suggesting that these branches convey corollary discharge signals as birds are actively engaged in learning.


2021 ◽  
Author(s):  
Judith M. Varkevisser ◽  
Ralph Simon ◽  
Ezequiel Mendoza ◽  
Martin How ◽  
Idse van Hijlkema ◽  
...  

Abstract
Bird song and human speech are learned early in life, and in both cases engagement with live social tutors generally leads to better learning outcomes than passive audio-only exposure. Real-world tutor–tutee relations are normally not uni- but multimodal, and observations suggest that visual cues related to sound production might enhance vocal learning. We tested this hypothesis by pairing appropriate, colour-realistic, high frame-rate videos of a singing adult male zebra finch tutor with song playbacks and presenting these stimuli to juvenile zebra finches (Taeniopygia guttata). Juveniles exposed to song playbacks combined with video presentation of a singing bird approached the stimulus more often and spent more time close to it than juveniles exposed to audio playback alone or audio playback combined with pixelated, time-reversed videos. However, this higher engagement with the realistic audio–visual stimuli did not predict better song learning. Thus, although multimodality increased stimulus engagement, and biologically relevant video content was more salient than colour- and movement-equivalent control videos, the higher engagement did not lead to enhanced vocal learning. Whether the lack of three-dimensionality of a video tutor and/or the lack of meaningful social interaction makes it less suitable for facilitating song learning than audio–visual exposure to a live tutor remains to be tested.


2014 ◽  
Vol 281 (1781) ◽  
pp. 20132630 ◽  
Author(s):  
Mugdha Deshpande ◽  
Fakhriddin Pirlepesov ◽  
Thierry Lints

As in human infant speech development, vocal imitation in songbirds involves sensory acquisition and memorization of adult-produced vocal signals, followed by a protracted phase of vocal motor practice. The internal model of adult tutor song in the juvenile male brain, termed ‘the template’, is central to the vocal imitation process. However, even the most fundamental aspects of the template, such as when, where and how it is encoded in the brain, remain poorly understood. A major impediment to progress is that current studies of songbird vocal learning use protracted tutoring over days, weeks or months, complicating dissection of the template encoding process. Here, we take the key step of tightly constraining the timing of template acquisition. We show that, in the zebra finch, template encoding can be time locked to, on average, a 2 h period of juvenile life and based on just 75 s of cumulative tutor song exposure. Crucially, we find that vocal changes occurring on the day of training correlate with eventual imitative success. This paradigm will lead to insights on how the template is instantiated in the songbird brain, with general implications for deciphering how internal models are formed to guide learning of complex social behaviours.


Author(s):  
Robert C. Berwick

Language comprises a central component of a complex that is sometimes called "the human capacity." This complex seems to have crystallized fairly recently among a small group in East Africa from whom all modern humans descend. Common descent has been important in the evolution of the brain, such that avian and mammalian brains may be largely homologous, particularly in brain regions involved in auditory perception, vocalization, and auditory memory. There has been convergent evolution of the capacity for auditory-vocal learning, and possibly for the structuring of external vocalizations, such that apes lack abilities that songbirds and humans share. Language's recent evolutionary origin suggests that the computational machinery underlying syntax arose via the introduction of a single, simple, combinatorial operation. Further, the relation of this simple combinatorial syntax to the sensory-motor and thought systems reveals language to be asymmetric in design: it precisely matches the representations required for inner mental thought, acting as the "glue" that binds together other internal cognitive and sensory modalities, yet it poses computational difficulties for externalization, that is, for parsing and for speech or signed production. Despite this mismatch, language syntax leads directly to the rich cognitive array that marks us as a symbolic species.


eLife ◽  
2018 ◽  
Vol 7 ◽  
Author(s):  
Xiaodong Li ◽  
Hiroshi Ishimoto ◽  
Azusa Kamikouchi

In birds and higher mammals, auditory experience during development is critical to discriminate sound patterns in adulthood. However, the neural and molecular nature of this acquired ability remains elusive. In fruit flies, acoustic perception has been thought to be innate. Here we report, surprisingly, that auditory experience of a species-specific courtship song in developing Drosophila shapes adult song perception and resultant sexual behavior. Preferences in the song-response behaviors of both males and females were tuned by social acoustic exposure during development. We examined the molecular and cellular determinants of this social acoustic learning and found that GABA signaling acting on the GABAA receptor Rdl in the pC1 neurons, the integration node for courtship stimuli, regulated auditory tuning and sexual behavior. These findings demonstrate that maturation of auditory perception in flies is unexpectedly plastic and is acquired socially, providing a model to investigate how song learning regulates mating preference in insects.


2005 ◽  
Vol 94 (6) ◽  
pp. 3698-3707 ◽  
Author(s):  
Sarah W. Bottjer

Developmental changes in synaptic properties may act to limit neural and behavioral plasticity associated with sensitive periods. This study characterized synaptic maturation in a glutamatergic thalamo-cortical pathway that is necessary for vocal learning in songbirds. Lesions of the projection from the medial dorsolateral nucleus of the thalamus (DLM) to the cortical nucleus LMAN (lateral magnocellular nucleus of the anterior nidopallium) greatly disrupt song behavior in juvenile birds during early stages of vocal learning. However, such lesions lose the ability to disrupt vocal behavior in normal birds at 60–70 days of age, around the time that selective auditory tuning for each bird's own song (BOS) emerges in LMAN neurons. This pattern has suggested that LMAN is involved in processing song-related information and evaluating the degree to which vocal motor output matches the tutor song to be learned. Analysis of evoked excitatory postsynaptic currents at DLM→LMAN synapses in in vitro slice preparations revealed a pronounced N-methyl-d-aspartate receptor (NMDAR)-mediated component in both juvenile and adult cells, with no developmental decrease in the relative contribution of NMDARs to synaptic transmission. However, the synaptic failure rate at DLM→LMAN synapses in juvenile males during the sensitive period for song learning was significantly lower at depolarized potentials than at hyperpolarized potentials. In contrast, the failure rate at DLM→LMAN synapses did not differ at hyper- versus depolarized holding potentials in adult males that had completed the acquisition of a stereotyped song. This pattern indicates that juvenile cells have a higher incidence of silent (NMDAR-only) synapses, which are postsynaptically silent at hyperpolarized potentials due to the voltage-dependent gating of NMDARs.
Thus the decreased involvement of the LMAN pathway in vocal behavior is mirrored by a decline in the incidence of silent synapses but not by changes in the relative number of NMDA and AMPA receptors at DLM→LMAN synapses. These findings suggest that a developmental decrease in silent synapses within LMAN may represent a neural correlate of behavioral plasticity during song learning.


2021 ◽  
Author(s):  
Carlos A. Rodriguez-Saltos ◽  
Aditya Bhise ◽  
Prasanna Karur ◽  
Ramsha Nabihah Khan ◽  
Sumin Lee ◽  
...  

In songbirds, learning to sing is a highly social process that likely involves social reward. Here, we hypothesized that the degree to which a juvenile songbird learns a song depends on the degree to which it finds that song rewarding to hear during vocal development. We tested this hypothesis by measuring song preferences in young birds during song learning and then analyzing their adult songs. Song preferences were measured in an operant key-pressing assay. Juvenile male zebra finches (Taeniopygia guttata) had access to two keys, each of which was associated with a higher likelihood of playing the song of their father or that of another familiar adult ("neighbor"). To minimize the effects of exposure on learning, we implemented a reinforcement schedule that allowed us to detect preferences while balancing exposure to each song. On average, the juveniles significantly preferred the father's song early during song learning, before they were themselves singing. At around post-hatch day 60, their preference shifted to the neighbor's song. At the end of the song-learning period, we recorded the juveniles' songs and compared them to the father's and the neighbor's songs. All of the birds copied the father's song. The accuracy with which the father's song was imitated was positively correlated with the peak strength of the preference for the father's song during the sensitive period. Our results show that preference for a social stimulus, in this case a vocalization, predicted social learning during development.
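One way an exposure-balancing operant schedule could work is sketched below. This is a minimal illustration only: the study's actual reinforcement schedule is not described in the abstract, and the throttling rule here (lowering the playback probability for whichever song has accumulated more plays) is purely hypothetical.

```python
import random

# Hypothetical two-key schedule: a press reveals preference, but playback
# probability drops for a song whose cumulative play count pulls ahead,
# keeping total exposure to the two songs roughly balanced.
class BalancedSchedule:
    def __init__(self, bias=0.2):
        self.plays = {"father": 0, "neighbor": 0}
        self.bias = bias  # how strongly the over-played song is throttled

    def press(self, key, rng=random.random):
        # Playback probability falls linearly with the play-count lead of
        # the pressed key's song over the other song.
        other = "neighbor" if key == "father" else "father"
        lead = self.plays[key] - self.plays[other]
        p = max(0.0, 1.0 - self.bias * max(0, lead))
        if rng() < p:
            self.plays[key] += 1
            return True   # song played
        return False      # press recorded, no playback
```

Key presses remain a free measure of preference even when playback is withheld, which is what lets such a design dissociate preference from exposure.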


2019 ◽  
Author(s):  
Kamila M. Jozwik ◽  
Michael Lee ◽  
Tiago Marques ◽  
Martin Schrimpf ◽  
Pouya Bashivan

Image features computed by specific convolutional artificial neural networks (ANNs) can be used to make state-of-the-art predictions of primate ventral stream responses to visual stimuli. However, in addition to selecting the specific ANN and layer that is used, the modeler makes other choices in preprocessing the stimulus image and generating brain predictions from ANN features. The effect of these choices on brain predictivity is currently underexplored. Here, we directly evaluated many of these choices by performing a grid search over network architectures, layers, image preprocessing strategies, feature pooling mechanisms, and the use of dimensionality reduction. Our goal was to identify model configurations that produce responses to visual stimuli that are most similar to human neural representations, as measured by human fMRI and MEG responses. In total, we evaluated more than 140,338 model configurations. We found that specific configurations of CORnet-S best predicted fMRI responses in early visual cortex, while CORnet-R and SqueezeNet models best predicted fMRI responses in inferior temporal cortex. Specific configurations of VGG-16 and CORnet-S models best predicted the MEG responses. We also observed that downsizing input images to ~50-75% of the input tensor size led to better-performing models than no downsizing (the default choice in most brain models for vision). Taken together, we present evidence that brain predictivity is sensitive not only to which ANN architecture and layer is used but also to choices in image preprocessing and feature postprocessing, and these choices should be explored further.
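The configuration grid search described above can be sketched as follows. The option values and the scoring function here are illustrative placeholders, not the authors' actual grid or their predictivity metric (which involved fitting ANN features to fMRI/MEG responses):

```python
import itertools

def build_config_grid():
    # Illustrative option lists; the architectures are named in the abstract,
    # but the remaining choices are hypothetical stand-ins for the real grid.
    architectures = ["CORnet-S", "CORnet-R", "SqueezeNet", "VGG-16"]
    layers = ["early", "middle", "late"]
    resize_fractions = [0.5, 0.75, 1.0]   # fraction of the input tensor size
    pooling = ["none", "avg", "max"]
    use_pca = [False, True]               # dimensionality reduction on or off
    return [dict(arch=a, layer=l, resize=r, pool=p, pca=d)
            for a, l, r, p, d in itertools.product(
                architectures, layers, resize_fractions, pooling, use_pca)]

def best_config(grid, score_fn):
    # score_fn maps one configuration to a brain-predictivity score
    # (higher is better); in practice it would fit features to neural data.
    return max(grid, key=score_fn)
```

With these placeholder options the grid holds 4 × 3 × 3 × 3 × 2 = 216 configurations; the study's full search was vastly larger because each factor had many more levels.
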

