Early Experience Determines How the Senses Will Interact

2007 ◽  
Vol 97 (1) ◽  
pp. 921-926 ◽  
Author(s):  
Mark T. Wallace ◽  
Barry E. Stein

Multisensory integration refers to the process by which the brain synthesizes information from different senses to enhance sensitivity to external events. In the present experiments, animals were reared in an altered sensory environment in which visual and auditory stimuli were temporally coupled but originated from different locations. Neurons in the superior colliculus developed a seemingly anomalous form of multisensory integration in which spatially disparate visual-auditory stimuli were integrated in the same way that neurons in normally reared animals integrated visual-auditory stimuli from the same location. The data suggest that the principles governing multisensory integration are highly plastic and that there is no a priori spatial relationship between stimuli from different senses that is required for their integration. Rather, these principles appear to be established early in life based on the specific features of an animal's environment to best adapt it to deal with that environment later in life.

2012 ◽  
Vol 108 (11) ◽  
pp. 2863-2866 ◽  
Author(s):  
Diana K. Sarko ◽  
Dipanwita Ghose

Normal sensory experience is necessary for the development of multisensory processing, such that disruption through environmental manipulations eliminates or alters multisensory integration. In this Neuro Forum, we examine the recent paper by Xu et al. ( J Neurosci 32: 2287–2298, 2012) which proposes that the statistics of cross-modal stimuli encountered early in life might be a driving factor for the development of normal multisensory integrative abilities in superior colliculus neurons. We present additional interpretations of their analyses as well as future directions and translational implications of this study for understanding the neural substrates and plasticity inherent to multisensory processing.


Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 369-369 ◽  
Author(s):  
B E Stein

That sensory cues in one modality affect perception in another has been known for some time, and there are many examples of ‘intersensory’ influences within the broad phenomenon of cross-modal integration. The ability of the CNS to integrate cues from different sensory channels is particularly evident in the facilitated detection and reaction to combinations of concordant cues from different modalities, and in the dramatic perceptual anomalies that can occur when these cues are discordant. A substrate for multisensory integration is provided by the many CNS neurons (eg, in the superior colliculus) which receive convergent input from multiple sensory modalities. Similarities in the principles by which these neurons integrate multisensory information in different species point to a remarkable conservation in the integrative features of the CNS during vertebrate evolution. In general, profound enhancement or depression in neural activity can be induced in the same neuron, depending on the spatial and temporal relationships among the stimuli presented to it. The specific response product obtained in any given multisensory neuron is predictable on the basis of the features of its various receptive fields. Perhaps most striking, however, is the parallel which has been demonstrated between the properties of multisensory integration at the level of the single neuron in the superior colliculus and at the level of overt attentive and orientation behaviour.
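The spatial principle described in this abstract, enhancement for concordant cues and depression for discordant ones, can be illustrated with a toy rate model of a single multisensory SC neuron. The Gaussian receptive field, the stimulus geometry, and both interaction coefficients below are illustrative assumptions, not values from the experiments.

```python
import math

def rf_response(stim_pos, rf_center, rf_width=20.0):
    """Gaussian receptive-field response (0..1) to a stimulus at stim_pos (deg)."""
    return math.exp(-((stim_pos - rf_center) ** 2) / (2 * rf_width ** 2))

def multisensory_response(vis_pos, aud_pos, rf_center=0.0):
    """Toy SC neuron: both modalities share one receptive-field center.
    Spatially concordant cues combine superadditively; a cue falling outside
    the receptive field suppresses the response (coefficients are arbitrary)."""
    v = rf_response(vis_pos, rf_center)
    a = rf_response(aud_pos, rf_center)
    gain = 2.0 * v * a - 0.8 * max(v, a) * (1.0 - min(v, a))
    return v + a + gain
```

With both stimuli at the receptive-field center the combined response exceeds either unimodal response (enhancement); with the auditory stimulus displaced 90 degrees away, the combined response falls below the visual response alone (depression), mirroring the spatial rule described above.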


Author(s):  
Caroline A. Miller ◽  
Laura L. Bruce

The first visual cortical axons arrive in the cat superior colliculus by the time of birth. Adultlike receptive fields develop slowly over several weeks following birth. The developing cortical axons go through a sequence of changes before acquiring their adultlike morphology and function. To determine how these axons interact with neurons in the colliculus, cortico-collicular axons were labeled with biocytin (an anterograde neuronal tracer) and studied with electron microscopy. Deeply anesthetized animals received 200-500 nl injections of biocytin (Sigma; 5% in phosphate buffer) in the lateral suprasylvian visual cortical area. After a 24 hr survival time, the animals were deeply anesthetized and perfused with 0.9% phosphate buffered saline followed by fixation with a solution of 1.25% glutaraldehyde and 1.0% paraformaldehyde in 0.1 M phosphate buffer. The brain was sectioned transversely on a vibratome at 50 μm. The tissue was processed immediately to visualize the biocytin.


2021 ◽  
Vol 11 (8) ◽  
pp. 3397 ◽  
Author(s):  
Gustavo Assunção ◽  
Nuno Gonçalves ◽  
Paulo Menezes

Human beings have developed remarkable abilities to integrate information from various sensory sources by exploiting their inherent complementarity. Perceptual capabilities are thereby heightened, enabling, for instance, the well-known "cocktail party" and McGurk effects, i.e., speech disambiguation in a panoply of sound signals. This fusion ability is also key in refining the perception of sound source location, as in distinguishing whose voice is being heard in a group conversation. Furthermore, neuroscience has identified the superior colliculus as the brain region responsible for this modality fusion, and a handful of biological models have been proposed to approximate its underlying neurophysiological process. Drawing inspiration from one of these models, this paper presents a methodology for effectively fusing correlated auditory and visual information for active speaker detection. Such an ability can have a wide range of applications, from teleconferencing systems to social robotics. The detection approach initially routes auditory and visual information through two specialized neural network structures. The resulting embeddings are fused via a novel layer based on the superior colliculus, whose topological structure emulates the spatial cross-mapping of unimodal perceptual fields. The validation process employed two publicly available datasets, with the achieved results confirming and greatly surpassing initial expectations.
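The abstract does not specify the fusion layer's equations. As a rough sketch of the underlying idea, the cross-mapped unimodal fields can be modeled as audio and visual activity maps over shared spatial bins, combined so that spatially coincident activity is amplified; the bin layout, the multiplicative gain, and the normalization are all assumptions for illustration, not the paper's architecture.

```python
import numpy as np

def sc_fusion(audio_map, visual_map):
    """Toy superior-colliculus-style fusion: unimodal activity maps defined over
    the same spatial bins are combined so that spatially coincident activity is
    amplified (multiplicative term) on top of the unimodal sum."""
    audio_map = np.asarray(audio_map, dtype=float)
    visual_map = np.asarray(visual_map, dtype=float)
    fused = audio_map + visual_map + audio_map * visual_map
    return fused / fused.sum()  # normalize to a spatial distribution

def active_speaker_bin(audio_map, visual_map):
    """Index of the spatial bin with the strongest fused response."""
    return int(np.argmax(sc_fusion(audio_map, visual_map)))
```

For example, with a noisy audio peak at bin 2 and visible faces at bins 2 and 5, the multiplicative term boosts the bin where both modalities agree, so the fused map selects bin 2 as the active speaker.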


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Chih-Wei Lin ◽  
Yu Hong ◽  
Jinfu Liu

Abstract Background Glioma is a malignant brain tumor whose complex location makes it difficult to remove surgically. Doctors can precisely diagnose and localize the disease using medical images, but computer-assisted diagnosis remains a challenge because rough segmentation of the tumor leads to incorrect grading of its internal structure. Methods In this paper, we propose an Aggregation-and-Attention Network for brain tumor segmentation. The proposed network takes the U-Net as its backbone, aggregates multi-scale semantic information, and focuses on crucial information to perform brain tumor segmentation. To this end, we propose an enhanced down-sampling module and an up-sampling layer to compensate for information loss, and a multi-scale connection module to construct multi-receptive-field semantic fusion between encoder and decoder. Furthermore, we design a dual-attention fusion module that extracts and enhances the spatial relationships in magnetic resonance images, and apply deep supervision in different parts of the proposed network. Results Experimental results show that the proposed framework outperforms state-of-the-art networks on the BraTS2020 dataset, with average scores on the four evaluation indexes of 0.860, 0.885, 0.932, and 1.2325, respectively. Conclusions The proposed framework and its modules are practical: they extract and aggregate useful semantic information and enhance the ability to segment gliomas.
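The paper's exact dual-attention design is not given in the abstract. A minimal sketch of the general channel-plus-spatial attention pattern on a (C, H, W) feature map, with illustrative gating and fusion choices rather than the paper's module, might look like this:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dual_attention(feat):
    """Sketch of a dual-attention step on a (C, H, W) feature map.
    Channel branch: reweight channels by their global average response.
    Spatial branch: reweight positions by the channel-mean response.
    The sigmoid gates and the additive fusion are illustrative choices."""
    chan_w = sigmoid(feat.mean(axis=(1, 2)))   # (C,) channel attention weights
    chan_out = feat * chan_w[:, None, None]
    spat_w = sigmoid(feat.mean(axis=0))        # (H, W) spatial attention weights
    spat_out = feat * spat_w[None, :, :]
    return chan_out + spat_out                 # fuse the two branches
```

The output keeps the input shape, so such a module can be dropped between encoder and decoder stages of a U-Net-style backbone.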


2013 ◽  
Vol 310 ◽  
pp. 660-664 ◽  
Author(s):  
Zi Guang Li ◽  
Guo Zhong Liu

As an emerging technology, the brain-computer interface (BCI) provides a novel communication channel that translates brain activity into command signals for devices such as computers, prostheses, and robots. The aim of brain-computer interface research is to improve the quality of life of patients suffering from severe neuromuscular diseases. This paper focuses on analyzing the differing characteristics of brainwaves when a subject responds "yes" or "no" to auditory stimulation questions. The experiment adopts auditory stimuli in the form of spoken questions. Features were extracted with common spatial patterns (CSP) and classified with a support vector machine (SVM). The classification accuracy for "yes" and "no" answers reached 80.2%. The results show the feasibility and effectiveness of this approach and provide a basis for further research.
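The CSP step can be sketched as a generalized eigendecomposition of the two class-averaged covariance matrices, solved here by whitening their sum. The snippet shows only the filter computation and the usual log-variance features; the paper's SVM classifier is omitted, and all data shapes are assumptions.

```python
import numpy as np

def csp_filters(cov_a, cov_b):
    """Common spatial patterns: spatial filters W such that the filtered signal
    has maximal variance for one class and minimal for the other.
    Solved via whitening of the composite covariance cov_a + cov_b."""
    evals, evecs = np.linalg.eigh(cov_a + cov_b)
    whiten = evecs @ np.diag(evals ** -0.5) @ evecs.T
    s_a = whiten @ cov_a @ whiten.T
    _, evecs_a = np.linalg.eigh(s_a)
    # Rows ordered by eigenvalue: first filters favour class B, last favour class A.
    return evecs_a.T @ whiten

def log_var_features(trial, filters):
    """Log of normalized variance of each filtered signal (the usual CSP feature).
    trial: (n_channels, n_samples) array for one epoch."""
    filtered = filters @ trial
    var = filtered.var(axis=1)
    return np.log(var / var.sum())
```

On synthetic two-channel data where class A concentrates variance on channel 0 and class B on channel 1, the last filter's log-variance dominates for a class-A trial, which is the separation a downstream SVM would exploit.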


1998 ◽  
Vol 80 (2) ◽  
pp. 1006-1010 ◽  
Author(s):  
Mark T. Wallace ◽  
M. Alex Meredith ◽  
Barry E. Stein

Wallace, Mark T., M. Alex Meredith, and Barry E. Stein. Multisensory integration in the superior colliculus of the alert cat. J. Neurophysiol. 80: 1006–1010, 1998. The modality convergence patterns, sensory response properties, and principles governing multisensory integration in the superior colliculus (SC) of the alert cat were found to have fundamental similarities to those in anesthetized animals. Of particular interest was the observation that, in a manner indistinguishable from the anesthetized animal, combinations of two different sensory stimuli significantly enhanced the responses of SC neurons above those evoked by either unimodal stimulus. These observations are consistent with the speculation that there is a functional link between multisensory integration in individual SC neurons and cross-modality attentive and orientation behaviors.


2016 ◽  
Vol 2 (8) ◽  
pp. e1501070 ◽  
Author(s):  
Liu Zhou ◽  
Teng Leng Ooi ◽  
Zijiang J. He

Our sense of vision reliably directs and guides our everyday actions, such as reaching and walking. This ability is especially fascinating because the optical images of natural scenes that project into our eyes are insufficient to adequately form a perceptual space. It has been proposed that the brain makes up for this inadequacy by using its intrinsic spatial knowledge. However, it is unclear what constitutes intrinsic spatial knowledge and how it is acquired. We investigated this question and showed evidence of an ecological basis, which uses the statistical spatial relationship between the observer and the terrestrial environment, namely, the ground surface. We found that in dark and reduced-cue environments where intrinsic knowledge has a greater contribution, perceived target location is more accurate when referenced to the ground than to the ceiling. Furthermore, taller observers more accurately localized the target. Superior performance was also observed in the full-cue environment, even when we compensated for the observers’ heights by having the taller observer sit on a chair and the shorter observers stand on a box. Although fascinating, this finding dovetails with the prediction of the ecological hypothesis for intrinsic spatial knowledge. It suggests that an individual’s accumulated lifetime experiences of being tall and his or her constant interactions with ground-based objects not only determine intrinsic spatial knowledge but also endow him or her with an advantage in spatial ability in the intermediate distance range.
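The ecological account of ground-referenced localization is commonly formalized with the angular-declination model, in which the distance d to a ground target follows from eye height h and the angle of declination below the horizon: d = h / tan(alpha). This is a standard model from the distance-perception literature, not an equation stated in the abstract, but it makes the role of eye height explicit.

```python
import math

def ground_distance(eye_height, declination_deg):
    """Distance to a target on the ground from the observer's eye height (m)
    and the target's angular declination below the horizon (deg):
    d = h / tan(alpha)."""
    return eye_height / math.tan(math.radians(declination_deg))
```

At a 45-degree declination the target lies exactly one eye height away; at a fixed declination, a taller observer's greater eye height maps the same angle to a larger (and differently calibrated) distance, consistent with the height effects reported above.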


2011 ◽  
Vol 106 (4) ◽  
pp. 1862-1874 ◽  
Author(s):  
Jan Churan ◽  
Daniel Guitton ◽  
Christopher C. Pack

Our perception of the positions of objects in our surroundings is surprisingly unaffected by movements of the eyes, head, and body. This suggests that the brain has a mechanism for maintaining perceptual stability, based either on the spatial relationships among visible objects or internal copies of its own motor commands. Strong evidence for the latter mechanism comes from the remapping of visual receptive fields that occurs around the time of a saccade. Remapping occurs when a single neuron responds to visual stimuli placed presaccadically in the spatial location that will be occupied by its receptive field after the completion of a saccade. Although evidence for remapping has been found in many brain areas, relatively little is known about how it interacts with sensory context. This interaction is important for understanding perceptual stability more generally, as the brain may rely on extraretinal signals or visual signals to different degrees in different contexts. Here, we have studied the interaction between visual stimulation and remapping by recording from single neurons in the superior colliculus of the macaque monkey, using several different visual stimulus conditions. We find that remapping responses are highly sensitive to low-level visual signals, with the overall luminance of the visual background exerting a particularly powerful influence. Specifically, although remapping was fairly common in complete darkness, such responses were usually decreased or abolished in the presence of modest background illumination. Thus the brain might make use of a strategy that emphasizes visual landmarks over extraretinal signals whenever the former are available.

