Early Hearing-Impairment Results in Crossmodal Reorganization of Ferret Core Auditory Cortex

2012 ◽  
Vol 2012 ◽  
pp. 1-13 ◽  
Author(s):  
M. Alex Meredith ◽  
Brian L. Allman

Numerous investigations of cortical crossmodal plasticity, most often in congenital or early-deaf subjects, have indicated that secondary auditory cortical areas reorganize to exhibit visual responsiveness while the core auditory regions are largely spared. However, a recent study of adult-deafened ferrets demonstrated that core auditory cortex was reorganized by the somatosensory modality. Because adult animals have matured beyond their critical period of sensory development and plasticity, it was not known if adult-deafening and early-deafening would generate the same crossmodal results. The present study used young, ototoxically-lesioned ferrets (n=3) that, after maturation (avg. = 173 days old), showed significant hearing deficits (avg. threshold = 72 dB SPL). Recordings from single-units (n=132) in core auditory cortex showed that 72% were activated by somatosensory stimulation (compared to 1% in hearing controls). In addition, tracer injection into early hearing-impaired core auditory cortex labeled essentially the same auditory cortical and thalamic projection sources as seen for injections in the hearing controls, indicating that the functional reorganization was not the result of new or latent projections to the cortex. These data, along with similar observations from adult-deafened and adult hearing-impaired animals, support the recently proposed brainstem theory for crossmodal plasticity induced by hearing loss.
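The headline contrast above (72% of 132 recorded units somatosensory-responsive, versus 1% in hearing controls) is the kind of result typically checked with a two-proportion test. A minimal sketch in Python; the counts used here (95 of 132 impaired units, and a hypothetical 1 of 100 for controls, since the abstract does not give the control sample size) are illustrative assumptions, not figures from the study:

```python
import math

def two_proportion_ztest(k1, n1, k2, n2):
    """Two-sided z-test for the difference between two proportions."""
    p1, p2 = k1 / n1, k2 / n2
    p_pool = (k1 + k2) / (n1 + n2)                     # pooled proportion under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# 72% of 132 units is ~95 units; the control count (1 of 100) is hypothetical.
z, p = two_proportion_ztest(95, 132, 1, 100)
print(f"z = {z:.2f}, p = {p:.3g}")
```

With proportions this far apart, any reasonable control sample size yields an unambiguous result.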

2011 ◽  
Vol 105 (4) ◽  
pp. 1558-1573 ◽  
Author(s):  
Yu-Ting Mao ◽  
Tian-Miao Hua ◽  
Sarah L. Pallas

Sensory neocortex is capable of considerable plasticity after sensory deprivation or damage to input pathways, especially early in development. Although plasticity can often be restorative, sometimes novel, ectopic inputs invade the affected cortical area. Invading inputs from other sensory modalities may compromise the original function or even take over, imposing a new function and preventing recovery. Using ferrets whose retinal axons were rerouted into auditory thalamus at birth, we were able to examine the effect of varying the degree of ectopic, cross-modal input on reorganization of developing auditory cortex. In particular, we assayed whether the invading visual inputs and the existing auditory inputs competed for or shared postsynaptic targets and whether the convergence of input modalities would induce multisensory processing. We demonstrate that although the cross-modal inputs create new visual neurons in auditory cortex, some auditory processing remains. The degree of damage to auditory input to the medial geniculate nucleus was directly related to the proportion of visual neurons in auditory cortex, suggesting that the visual and residual auditory inputs compete for cortical territory. Visual neurons were not segregated from auditory neurons but shared target space even on individual target cells, substantially increasing the proportion of multisensory neurons. Thus spatial convergence of visual and auditory input modalities may be sufficient to expand multisensory representations. Together these findings argue that early, patterned visual activity does not drive segregation of visual and auditory afferents and suggest that auditory function might be compromised by converging visual inputs. These results indicate possible ways in which multisensory cortical areas may form during development and evolution. They also suggest that rehabilitative strategies designed to promote recovery of function after sensory deprivation or damage need to take into account that sensory cortex may become substantially more multisensory after alteration of its input during development.
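The reported direct relation between the degree of medial geniculate damage and the proportion of visual neurons in auditory cortex is the sort of across-animal relation usually summarized with a correlation coefficient. A minimal Pearson-correlation sketch in Python; the per-animal values below are hypothetical, invented only to illustrate the computation, and are not data from the study:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-animal values: fraction of auditory input to MGN lost vs.
# fraction of visually responsive neurons recorded in auditory cortex.
damage = [0.1, 0.3, 0.5, 0.7, 0.9]
visual_fraction = [0.12, 0.28, 0.55, 0.66, 0.91]
print(f"r = {pearson_r(damage, visual_fraction):.2f}")
```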


2017 ◽  
Author(s):  
Krishna C. Puvvada ◽  
Jonathan Z. Simon

Abstract
The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in auditory cortex. Here, using magnetoencephalography (MEG) recordings from human subjects, both men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in auditory cortex contain dominantly spectrotemporal representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than one in which individual streams are represented separately. In contrast, we also show that higher-order auditory cortical areas represent the attended stream separately, and with significantly higher fidelity, than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Taken together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of human auditory cortex.

Significance Statement
Using magnetoencephalography (MEG) recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of auditory cortex. We show that the primary-like areas in auditory cortex use a dominantly spectrotemporal representation of the entire auditory scene, with both attended and ignored speech streams represented with almost equal fidelity. In contrast, we show that higher-order auditory cortical areas represent an attended speech stream separately from, and with significantly higher fidelity than, unattended speech streams. Furthermore, the unattended background streams are represented as a single undivided background object rather than as distinct background objects.
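Stimulus reconstruction of the kind described above is commonly implemented as a lagged linear decoder (a "backward model") fit with ridge regression, mapping the neural channels back to the stimulus envelope. A toy sketch in Python with simulated signals; this is not the authors' pipeline, and all signals, dimensions, and parameters here are assumptions chosen only to show the shape of the computation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: reconstruct a stimulus envelope from simulated "neural" channels.
# Each channel is a delayed, noisy copy of the envelope (entirely synthetic).
T, n_channels, n_lags = 2000, 8, 10
envelope = rng.standard_normal(T)
neural = np.stack([np.roll(envelope, lag) + 0.5 * rng.standard_normal(T)
                   for lag in range(n_channels)], axis=1)

# Lagged design matrix: each row holds all channels at several time delays.
X = np.hstack([np.roll(neural, -lag, axis=0) for lag in range(n_lags)])

# Ridge-regularized least-squares decoder weights.
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ envelope)
reconstruction = X @ w

# Reconstruction fidelity is scored by correlation with the true envelope.
r = np.corrcoef(reconstruction, envelope)[0, 1]
print(f"reconstruction accuracy r = {r:.2f}")
```

Comparing such an `r` for decoders trained on the attended versus unattended stream is one common way the "fidelity" contrast in the abstract is quantified.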


2013 ◽  
Vol 76 (4) ◽  
pp. 207-212 ◽  
Author(s):  
Masao Horie ◽  
Hiroaki Tsukano ◽  
Ryuichi Hishida ◽  
Hirohide Takebayashi ◽  
Katsuei Shibuki

2000 ◽  
Vol 20 (16) ◽  
pp. 6106-6116 ◽  
Author(s):  
V. Bess Aramakis ◽  
Candace Y. Hsieh ◽  
Frances M. Leslie ◽  
Raju Metherate

2019 ◽  
Vol 30 (4) ◽  
pp. 2586-2599 ◽  
Author(s):  
Stitipragyan Bhumika ◽  
Mari Nakamura ◽  
Patricia Valerio ◽  
Magdalena Solyga ◽  
Henrik Lindén ◽  
...  

Abstract
Neuronal circuits are shaped by experience during time windows of increased plasticity in postnatal development. In the auditory system, the critical period for the simplest sounds—pure frequency tones—is well defined. Critical periods for more complex sounds remain to be elucidated. We used in vivo electrophysiological recordings in the mouse auditory cortex to demonstrate that passive exposure to frequency modulated sweeps (FMS) from postnatal day 31 to 38 leads to long-term changes in the temporal representation of sweep directions. Immunohistochemical analysis revealed a decreased percentage of layer 4 parvalbumin-positive (PV+) cells during this critical period, paralleled with a transient increase in responses to FMS, but not to pure tones. Preventing the PV+ cell decrease with continuous white noise exposure delayed the critical period onset, suggesting a reduction in inhibition as a mechanism for this plasticity. Our findings shed new light on the dependence of plastic windows on stimulus complexity that persistently sculpt the functional organization of the auditory cortex.


2007 ◽  
Vol 58 ◽  
pp. S100 ◽  
Author(s):  
Junsei Horikawa ◽  
Ryota Numata ◽  
Daisuke Uchiyama ◽  
Shunji Sugimoto

Perception ◽  
10.1068/p5841 ◽  
2007 ◽  
Vol 36 (10) ◽  
pp. 1419-1430 ◽  
Author(s):  
Troy A Hackett ◽  
John F Smiley ◽  
Istvan Ulbert ◽  
George Karmos ◽  
Peter Lakatos ◽  
...  

The auditory cortex of nonhuman primates comprises a constellation of at least twelve interconnected areas distributed across three major regions on the superior temporal gyrus: core, belt, and parabelt. Individual areas are distinguished on the basis of unique profiles comprising architectonic features, thalamic and cortical connections, and neuron response properties. Recent demonstrations of convergent auditory–somatosensory interactions in the caudomedial (CM) and caudolateral (CL) belt areas prompted us to pursue anatomical studies to identify the source(s) of somatic input to auditory cortex. Corticocortical and thalamocortical connections were revealed by injecting neuroanatomical tracers into CM, CL, and adjoining fields of marmoset (Callithrix jacchus jacchus) and macaque (Macaca mulatta) monkeys. In addition to auditory cortex, the cortical connections of CM and CL included somatosensory (retroinsular, Ri; granular insula, Ig) and multisensory areas (temporal parietal occipital, temporal parietal temporal). Thalamic inputs included the medial geniculate complex and several multisensory nuclei (suprageniculate, posterior, limitans, medial pulvinar), but not the ventroposterior complex. Injections of the core (A1, R) and rostromedial areas of auditory cortex revealed sparse multisensory connections. The results suggest that areas Ri and Ig are the principal sources of somatosensory input to the caudal belt, while multisensory regions of cortex and thalamus may also contribute. The present data add to growing evidence of multisensory convergence in cortical areas previously considered to be ‘unimodal’, and also indicate that auditory cortical areas differ in this respect.


2006 ◽  
Vol 14 (03) ◽  
pp. 369-378 ◽  
Author(s):  
YOH-ICHI FUJISAKA ◽  
SEIJI NAKAGAWA ◽  
MITSUO TONOIKE

This paper describes the relationship between the eigenfrequencies of a realistic human head model reconstructed from CT scans and the pitch subjects perceive when presented with bone-conducted ultrasound. Our goal is to develop an optimal bone-conducted ultrasonic hearing aid for profoundly hearing-impaired persons. Improving speech intelligibility is a core requirement for such a hearing aid. To achieve this, the perception mechanism of bone-conducted ultrasound must be clarified; although many hypotheses have been reported, no conclusive agreement has yet been reached. We are particularly interested in the fact that the perceived pitch of bone-conducted ultrasound shows no frequency dependence, and we predict that the cochleae are involved in the perception mechanism, since MEG studies have verified that the auditory cortex responds to bone-conducted ultrasound. In this paper, waves propagating from the mastoid to both cochleae are numerically analyzed, and the characteristics of the transfer functions are estimated as a first step toward clarifying the perception mechanism underlying the pitch of bone-conducted ultrasonic stimuli.
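A transfer function between two measurement points, like the mastoid-to-cochlea paths analyzed here, is commonly estimated from segment-averaged cross- and auto-spectra (the H1 estimator). A toy sketch in Python, with a synthetic broadband signal and a known short filter standing in for the head model; the sampling rate, filter taps, and noise level are assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 200_000  # Hz; a rate suitable for ultrasonic-range signals (hypothetical)

# Toy stand-ins for the mastoid input and a cochlear response: the "output"
# is the input passed through a known FIR filter plus measurement noise.
x = rng.standard_normal(fs)                      # 1 s of broadband excitation
h_true = np.array([0.0, 0.6, 0.3, 0.1])
y = np.convolve(x, h_true, mode="full")[: len(x)] + 0.01 * rng.standard_normal(len(x))

# H1 estimator: cross-spectrum of (x, y) over auto-spectrum of x,
# averaged over segments to reduce variance.
nseg, nfft = 100, 2000
X = np.fft.rfft(x.reshape(nseg, nfft), axis=1)
Y = np.fft.rfft(y.reshape(nseg, nfft), axis=1)
H = (np.conj(X) * Y).mean(axis=0) / (np.conj(X) * X).mean(axis=0).real
freqs = np.fft.rfftfreq(nfft, d=1 / fs)

# The estimated impulse response should be close to the true filter taps.
h_est = np.fft.irfft(H)[: len(h_true)]
print(np.round(h_est, 2))
```

Averaging the spectra over segments before dividing is what makes the estimate robust to the additive measurement noise on the output channel.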

