Auditory fear conditioning alters neural gain in the cochlear nucleus: a wireless neural recording study in freely behaving rats

2020 ◽  
Vol 4 (4) ◽  
Author(s):  
Antonio G. Paolini ◽  
Simeon J. Morgan ◽  
Jee Hyun Kim

Abstract
Anxiety disorders involve a distorted perception of the world, including increased saliency of stress-associated cues. However, plasticity in the initial sensory regions of the brain following a fearful experience has never been examined. The cochlear nucleus (CN) is the first station in the central auditory system, with heterogeneous collections of neurons that not only project to but also receive projections from cortico-limbic regions, suggesting a potential for experience-dependent plasticity. Using wireless neural recordings in freely behaving rats, we demonstrate for the first time that neural gain in the CN is significantly altered by fear conditioning to auditory sequences. Specifically, the ventral subnuclei significantly increased their firing rate to the conditioned tone sequence, while the dorsal subnuclei significantly decreased their firing rate during the conditioning session overall. These findings suggest subregion-specific changes in the balance of inhibition and excitation in the CN as a result of the conditioning experience. Heart rate was measured as the conditioned response (CR): while responding before the conditioned stimulus (CS) did not change across baseline and conditioning sessions, significant changes in heart rate were observed to the tone sequence followed by shock. The heart-rate findings support acquisition of conditioned fear. Taken together, the present study presents the first evidence for potential experience-dependent changes in auditory perception that involve novel plasticity within the first site of auditory processing in the brain.

Author(s):  
Robert V. Shannon

The auditory brainstem implant (ABI) is a surgically implanted device to electrically stimulate auditory neurons in the cochlear nucleus complex of the brainstem in humans to restore hearing sensations. The ABI is similar in function to a cochlear implant, but overall outcomes are poorer. However, recent applications of the ABI to new patient populations and improvements in surgical technique have led to significant improvements in outcomes. While the ABI provides hearing benefits to patients, the outcomes challenge our understanding of how the brain processes neural patterns of auditory information. The neural pattern of activation produced by an ABI is highly unnatural, yet some patients achieve high levels of speech understanding. Based on a meta-analysis of ABI surgeries and outcomes, a theory is proposed of a specialized sub-system of the cochlear nucleus that is critical for speech understanding.


2020 ◽  
Vol 129 (4) ◽  
pp. 846-854
Author(s):  
Brandon L. Stone ◽  
Madison Beneda-Bender ◽  
Duncan L. McCollum ◽  
Jongjoo Sun ◽  
Joseph H. Shelley ◽  
...  

The executive functioning aspect of cognition was evaluated during graded exercise in Reserve Officers' Training Corps cadets. Executive function declined at exercise intensities of ≥80% of heart rate reserve. The decline in executive function was coupled with declines in the oxygenation of the prefrontal cortex, the brain region responsible for executive functioning. These data define the relationship between executive function and exercise intensity and provide evidence supporting the reticular-activating hypofrontality theory as a model of cognitive change.


1986 ◽  
Vol 56 (2) ◽  
pp. 261-286 ◽  
Author(s):  
W. S. Rhode ◽  
P. H. Smith

Physiological response properties of neurons in the ventral cochlear nucleus have a variety of features that are substantially different from the stereotypical auditory nerve responses that serve as the principal source of activation for these neurons. These emergent features are the result of the varying distribution of auditory nerve inputs on the soma and dendrites of the various cell types within the nucleus; the intrinsic membrane characteristics of the various cell types causing different responses to the same input in different cell types; and secondary excitatory and inhibitory inputs to different cell types. Well-isolated units were recorded with high-impedance glass microelectrodes, both intracellularly and extracellularly. Units were characterized by their temporal response to short tones, rate vs. intensity relation, and response areas. The principal response patterns were onset, chopper, and primary-like. Onset units are characterized by a well-timed first spike in response to tones at the characteristic frequency. For frequencies less than 1 kHz, onset units can entrain to the stimulus frequency with greater precision than their auditory nerve inputs. This implies that onset units receive converging inputs from a number of auditory nerve fibers. Onset units are divided into three subcategories, OC, OL, and OI. OC units have extraordinarily wide dynamic ranges and low-frequency selectivity. Some are capable of sustaining firing rates of 800 spikes/s at high intensities. They have the smallest standard deviation and coefficient of variation of the first spike latency of any cells in the cochlear nuclei. OC units are candidates for encoding intensity. OI and OL units differ from OC units in that they have dynamic ranges and frequency selectivity ranges much like those of auditory nerve fibers. They differ from one another in their steady-state firing rates; OI units fire mainly at the onset of a tone. 
OI units also differ from OL units in that they prefer frequency sweeps in the low-to-high direction. Primary-like-with-notch (PLN) units also respond to tones with a well-timed first spike. They differ from onset cells in that the onset peak is not always as precise and the spontaneous rate is higher. A comparison of the spontaneous and saturation firing rates of PLN units with those of auditory nerve fibers suggests that PLN units receive one to four auditory nerve fiber inputs. Chopper units fire in a sustained, regular manner when they are excited by sound. (ABSTRACT TRUNCATED AT 400 WORDS)


2021 ◽  
Author(s):  
Stephanie Brandl ◽  
Niels Trusbak Haumann ◽  
Simjon Radloff ◽  
Sven Dähne ◽  
Leonardo Bonetti ◽  
...  

Abstract
We propose here the informed use of a customised, data-driven machine-learning pipeline to analyse magnetoencephalography (MEG) data in a theoretical source space, with respect to the processing of a regular beat. This hypothesis- and data-driven analysis pipeline allows us to extract the maximally relevant components in MEG source space with respect to the oscillatory power in the frequency band of interest and, most importantly, the beat-related modulation of that power. Our pipeline combines Spatio-Spectral Decomposition (SSD; [1]) as a first step to seek activity in the frequency band of interest with a Source Power Co-modulation analysis (SPoC; [2]), which extracts those components that maximally entrain their activity with a given target function, here the periodicity of the beat in the frequency domain (hence, f-SPoC). MEG data (102 magnetometers) from 28 participants passively listening to a 5-min-long regular tone sequence with a 400 ms beat period (the "target function" for SPoC) were segmented into epochs of two beat periods each to guarantee a sufficiently long time window. As a comparison pipeline to SSD and f-SPoC, we carried out a state-of-the-art cluster-based permutation analysis (CBPA; [3]). The time-frequency analysis (TFA) of the extracted activity showed clear regular patterns of periodically occurring peaks and troughs across the alpha and beta bands (8-20 Hz) in the f-SPoC but not in the CBPA results, and both the depth and the specificity of the modulation at the beat frequency yielded a significant advantage. Future applications of this pipeline will target the relevance to behaviour and inform analogous analyses in the EEG, in order to finally work toward addressing dysfunctions in beat-based timing and their consequences.

Author summary
When listening to a regular beat, oscillations in the brain have been shown to synchronise with the frequency of that given beat. This phenomenon is called entrainment and has, in previous brain-imaging studies, been shown in the form of one peak and trough per beat cycle in a range of frequency bands within 15-25 Hz (beta band). Using machine-learning techniques, we designed an analysis pipeline based on Source Power Co-modulation (SPoC) that enables us to extract spatial components in MEG recordings that show these synchronisation effects very clearly, especially across 8-20 Hz. This approach requires no anatomical knowledge of the individual or even the average brain; it is purely data driven and can be applied in a hypothesis-driven fashion with respect to the "function" that we expect the brain to entrain with and the frequency band within which we expect to see this entrainment. We here apply our customised f-SPoC pipeline to MEG recordings from 28 participants passively listening to a 5-min-long tone sequence with a regular 2.5 Hz beat. In comparison to a cluster-based permutation analysis (CBPA), which finds sensors that show statistically significant power modulations across participants, our individually extracted f-SPoC components find a much stronger and clearer pattern of peaks and troughs within one beat cycle. In future work, this pipeline can be implemented to tackle more complex "target functions" like speech and music, and might pave the way toward rhythm-based rehabilitation strategies.
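The core of the SPoC step described above is a generalized eigenvalue problem: find spatial filters whose band-power per epoch co-varies maximally with a target function z. A minimal numpy sketch of that objective (SPoC_λ) on synthetic data is given below; the channel count, mixing vector, and target function are invented for illustration, and the authors' actual pipeline additionally includes the SSD step and operates on MEG source space rather than raw channels.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n_epochs, n_channels, n_times = 200, 8, 100

# Hypothetical target function z (one value per epoch, e.g. beat phase power)
z = rng.uniform(size=n_epochs)

# Synthetic data: one source whose amplitude grows with z, mixed into channels
A = rng.standard_normal((n_channels, 1))       # unknown mixing vector
X = np.empty((n_epochs, n_channels, n_times))
for e in range(n_epochs):
    s = (1.0 + z[e]) * rng.standard_normal(n_times)   # power co-modulates with z
    X[e] = A @ s[None, :] + rng.standard_normal((n_channels, n_times))

# Per-epoch covariance matrices, their average, and the z-weighted average
C_e = np.stack([X[e] @ X[e].T / n_times for e in range(n_epochs)])
C = C_e.mean(axis=0)
z_std = (z - z.mean()) / z.std()
Cz = np.tensordot(z_std, C_e, axes=1) / n_epochs

# SPoC_lambda: solve Cz w = lambda C w; the top eigenvector maximizes the
# covariance between component power and the target function z
evals, evecs = eigh(Cz, C)     # eigenvalues in ascending order
w = evecs[:, -1]               # spatial filter of the best component

# Component power per epoch should now track z
p = np.array([w @ C_e[e] @ w for e in range(n_epochs)])
r = np.corrcoef(p, z)[0, 1]
```

On this toy mixture the correlation r between extracted component power and z is high, which is exactly the property f-SPoC exploits when z encodes the beat periodicity.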


Biofeedback ◽  
2021 ◽  
Vol 49 (4) ◽  
pp. 86-88
Author(s):  
Leah M. Lagos

Postconcussion syndrome is a devastating condition of the mind, body, and even personality. Mounting research demonstrates that heart rate variability biofeedback can help the concussed individual in three critical ways: (a) eliciting high amplitude oscillations in cardiovascular functions and thereby strengthening self-regulatory control mechanisms; (b) restoring autonomic balance; and (c) increasing the afferent impulse stream from the baroreceptors to restore balance between inhibitory and excitatory processes in the brain.


Hypertension ◽  
2000 ◽  
Vol 36 (suppl_1) ◽  
pp. 727-727
Author(s):  
Ovidiu Baltatu ◽  
Ben J Janssen ◽  
Ralph Plehm ◽  
Detlev Ganten ◽  
Michael Bader

P191 The brain renin-angiotensin system (RAS) may play a functional role in the long-term and short-term control of blood pressure variability (BPV) and heart rate variability (HRV). To study this, we recorded the 24-h variation of BP and HR in transgenic rats TGR(ASrAOGEN), which have low brain angiotensinogen levels, during basal and hypertensive conditions induced by a low-dose s.c. infusion of angiotensin II (Ang II, 100 ng/kg/min) for 7 days. Cardiovascular parameters were monitored by telemetry. Short-term BPV and HRV were evaluated by spectral analysis, and as a measure of baroreflex sensitivity the transfer gain between the pressure and heart rate variations was calculated. During the Ang II infusion, the 24-h rhythm of BP was inverted in Sprague-Dawley (SD) but not TGR(ASrAOGEN) rats (day-night BP differences of 5.8 ± 2 vs. -0.4 ± 1.8 mm Hg per group, p < 0.05). In contrast, in both the SD and TGR(ASrAOGEN) rats, the 24-h HR rhythms remained unaltered and paralleled those of locomotor activity. The increase in systolic BP was significantly reduced in TGR(ASrAOGEN) in comparison to SD rats, as previously described, while HR was altered in neither TGR(ASrAOGEN) nor SD rats. The spectral index of baroreflex sensitivity (FFT gain between 0.3-0.6 Hz) was significantly higher in TGR(ASrAOGEN) than in SD rats during control (0.71 ± 0.1 vs. 0.35 ± 0.06, p < 0.05), but not during Ang II infusion (0.6 ± 0.07 vs. 0.4 ± 0.1, p > 0.05). These results demonstrate that the brain RAS plays an important role in mediating the effects of Ang II on the circadian variation of BP. Furthermore, these data are consistent with the view that the brain RAS modulates baroreflex control of HR in rats, with Ang II having an inhibitory role.
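The spectral index of baroreflex sensitivity used above is a cross-spectral transfer gain between BP and HR, averaged over the 0.3-0.6 Hz band. A minimal sketch of that computation on synthetic beat-to-beat series follows; the sampling rate, gain value, and signal model are invented for illustration, and the authors' telemetry data and exact estimator may differ.

```python
import numpy as np
from scipy.signal import welch, csd

rng = np.random.default_rng(1)
fs = 10.0                        # assumed resampling rate of the beat series, Hz
t = np.arange(0, 600, 1 / fs)    # 10 min of data

# Synthetic systolic BP: a 0.4 Hz oscillation plus broadband fluctuations
sbp = np.sin(2 * np.pi * 0.4 * t) + 0.5 * rng.standard_normal(t.size)

# Synthetic HR: the baroreflex scales BP fluctuations by a fixed gain
# (sign inverted: BP up -> HR down), plus HR noise
true_gain = 0.7                  # illustrative value, bpm per mmHg
hr = -true_gain * sbp + 0.3 * rng.standard_normal(t.size)

# Transfer gain estimate: |cross-spectrum| / input power spectrum
f, Pxx = welch(sbp, fs=fs, nperseg=1024)
_, Pxy = csd(sbp, hr, fs=fs, nperseg=1024)
gain = np.abs(Pxy) / Pxx

# Average the gain over the 0.3-0.6 Hz band used in the abstract
band = (f >= 0.3) & (f <= 0.6)
band_gain = gain[band].mean()
```

Here band_gain approximates the simulated gain of 0.7; on real recordings one would also verify coherence between BP and HR in the band before trusting the gain estimate.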


Author(s):  
Eric D. Young ◽  
Donata Oertel

Neuronal circuits in the brainstem convert the output of the ear, which carries the acoustic properties of ongoing sound, to a representation of the acoustic environment that can be used by the thalamocortical system. Most important, brainstem circuits reflect the way the brain uses acoustic cues to determine where sounds arise and what they mean. The circuits merge the separate representations of sound in the two ears and stabilize them in the face of disturbances such as loudness fluctuation or background noise. Embedded in these systems are some specialized analyses that are driven by the need to resolve tiny differences in the time and intensity of sounds at the two ears and to resolve rapid temporal fluctuations in sounds like the sequence of notes in music or the sequence of syllables in speech.

