Greater benefits of multisensory integration during complex sensorimotor transformations

2012 · Vol 107 (11) · pp. 3135-3143
Author(s): Verena N. Buchholz, Samanthi C. Goonetilleke, W. Pieter Medendorp, Brian D. Corneil

Multisensory integration enables rapid and accurate behavior. To orient in space, sensory information initially registered in different reference frames has to be integrated with current postural information to produce an appropriate motor response. In some postures, sensory evidence must first converge across cerebral hemispheres before it can be integrated, which would presumably slow or hinder integration. Here, we examined orienting gaze shifts in humans to visual, tactile, or visuotactile stimuli when the hands were either in a default uncrossed posture or in a crossed posture requiring convergence across hemispheres. Surprisingly, we observed the greatest benefits of multisensory integration in the crossed posture, as indexed by reaction time (RT) decreases. Moreover, this shortening of RTs to multisensory stimuli did not come at the cost of increased error propensity. To explain these results, we propose that two accepted principles of multisensory integration, the spatial principle and inverse effectiveness, dynamically interact to aid the rapid and accurate resolution of complex sensorimotor transformations. First, early mutual inhibition of initial visual and tactile responses registered in different hemispheres reduces error propensity. Second, inverse effectiveness in the integration of the weakened visual response with the remapped tactile representation expedites the generation of the correct motor response. Our results imply that the concept of inverse effectiveness, which is usually associated with external stimulus properties, might extend to internal spatial representations that are more complex in certain body postures.
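The RT benefit reported here is the standard behavioral index of multisensory integration. A common way to test whether such facilitation exceeds mere statistical facilitation (the abstract does not detail the analysis) is Miller's race-model inequality; below is a minimal Python sketch on synthetic data, with all RT values invented for illustration:

```python
import numpy as np

def race_model_violation(rt_v, rt_t, rt_vt, qs=np.arange(0.05, 1.0, 0.05)):
    """Miller's race-model inequality:
    P(RT_VT <= t) <= P(RT_V <= t) + P(RT_T <= t).
    Positive return values mark quantiles where the multisensory CDF
    exceeds the race-model bound, i.e., facilitation beyond what
    independent unisensory races could produce."""
    t = np.quantile(rt_vt, qs)                        # probe times
    cdf = lambda rt: np.searchsorted(np.sort(rt), t, side="right") / len(rt)
    bound = np.minimum(cdf(rt_v) + cdf(rt_t), 1.0)    # race-model upper bound
    return cdf(rt_vt) - bound

# Illustrative synthetic RTs (ms), not data from the study.
rng = np.random.default_rng(0)
rt_v = rng.normal(260, 40, 500)    # visual-only gaze shifts
rt_t = rng.normal(280, 40, 500)    # tactile-only gaze shifts
rt_vt = rng.normal(225, 35, 500)   # visuotactile: faster than either
print(race_model_violation(rt_v, rt_t, rt_vt).round(3))
```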

2017 · Vol 117 (4) · pp. 1569-1580
Author(s): Nienke B. Debats, Marc O. Ernst, Herbert Heuer

Humans can skillfully operate tools in which their hand movement is linked, via a kinematic transformation, to a spatially distant object moving in a separate plane of motion. An everyday example is controlling a cursor on a computer monitor. Despite these separate reference frames, the perceived positions of the hand and the object have been found to be biased toward each other. We propose that this perceptual attraction is based on the principles by which the brain integrates redundant sensory information about single objects or events, known as optimal multisensory integration. That is, 1) sensory information about the hand and the tool is weighted according to its relative reliability (i.e., inverse variance), and 2) the unisensory reliabilities sum in the integrated estimate. We assessed whether perceptual attraction is consistent with the predictions of the optimal multisensory integration model. We used a cursor-control tool-use task in which we manipulated the relative reliability of the unisensory hand and cursor position estimates. The perceptual biases shifted according to these relative reliabilities, with an additional bias due to contextual factors that were present in experiment 1 but not in experiment 2. The variances of the biased position judgments were, however, systematically larger than the predicted optimal variances. Our findings suggest that the perceptual attraction in tool use results from a reliability-based weighting mechanism similar to optimal multisensory integration, but that certain boundary conditions for optimality might not be satisfied.

NEW & NOTEWORTHY Kinematic tool use is associated with a perceptual attraction between the spatially separated hand and the effective part of the tool. We provide a formal account of this phenomenon, showing that the underlying process is similar to optimal integration of sensory information relating to a single object.
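The two ingredients listed above have a compact closed form, the standard maximum-likelihood cue-combination equations. A minimal sketch (variable names are mine, not the authors'):

```python
def optimal_integration(mu_hand, var_hand, mu_cursor, var_cursor):
    """Reliability-weighted cue combination:
    1) each estimate is weighted by its reliability (inverse variance);
    2) reliabilities add, so the integrated variance is below both inputs."""
    r_hand, r_cursor = 1.0 / var_hand, 1.0 / var_cursor
    w_hand = r_hand / (r_hand + r_cursor)
    mu_int = w_hand * mu_hand + (1.0 - w_hand) * mu_cursor
    var_int = 1.0 / (r_hand + r_cursor)
    return mu_int, var_int

# Example: an unreliable hand estimate is pulled toward a reliable cursor.
print(optimal_integration(mu_hand=0.0, var_hand=4.0,
                          mu_cursor=2.0, var_cursor=1.0))
# -> (1.6, 0.8): biased toward the cursor; variance below both inputs
```

The authors' finding that observed variances exceeded the predicted integrated variance is exactly a violation of the second prediction.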


eLife · 2017 · Vol 6
Author(s): Torrey LS Truszkowski, Oscar A Carrillo, Julia Bleier, Carolina M Ramirez-Vizcarrondo, Daniel L Felch, ...

To build a coherent view of the external world, an organism needs to integrate multiple types of sensory information from different sources, a process known as multisensory integration (MSI). Previously, we showed that the temporal dependence of MSI in the optic tectum of Xenopus laevis tadpoles is mediated by the network dynamics of the recruitment of local inhibition by sensory input (Felch et al., 2016). This was one of the first cellular-level mechanisms described for MSI. Here, we expand this cellular-level view of MSI by focusing on the principle of inverse effectiveness, another central feature of MSI, which states that the amount of multisensory enhancement observed depends inversely on the size of the unisensory responses. We show that nonlinear summation of crossmodal synaptic responses, mediated by NMDA-type glutamate receptor (NMDAR) activation, forms the basis for inverse effectiveness at both the cellular and behavioral levels.
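Inverse effectiveness and superadditivity are typically quantified with simple indices; a minimal sketch with invented response magnitudes (the percent-enhancement convention follows Meredith and Stein):

```python
def enhancement(multi, uni_a, uni_b):
    """Multisensory enhancement (%) relative to the best unisensory
    response; values > 0 indicate facilitation."""
    best = max(uni_a, uni_b)
    return 100.0 * (multi - best) / best

def additivity(multi, uni_a, uni_b):
    """Difference from the unisensory sum; values > 0 indicate the kind
    of superadditive summation attributed above to NMDAR activation."""
    return multi - (uni_a + uni_b)

# Invented values: weak unisensory responses yield large proportional
# enhancement, strong ones little -- the inverse-effectiveness signature.
print(enhancement(9, 3, 4), additivity(9, 3, 4))        # 125.0, 2  (weak)
print(enhancement(42, 30, 35), additivity(42, 30, 35))  # 20.0, -23 (strong)
```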


2009 · Vol 102 (5) · pp. 2755-2762
Author(s): Sukhvinder S. Obhi, Shannon Matkovich, Robert Chen

Humans often have to modify the timing and/or type of their planned actions on the basis of new sensory information. In the present experiments, participants planned to make a right index finger keypress 3 s after a warning stimulus but on some trials were interrupted by a temporally unpredictable auditory tone prompting the same action (experiment 1) or a different action (experiment 2). In experiment 1, by comparing the reaction time (RT) to tones presented at different stages of the preparatory period with RT in a simple reaction time condition, we determined the cost of switching from an internally generated mode of response production to an externally triggered mode in situations requiring only a change in when an action is made (i.e., when the tone prompts the action at a different time from the intended time of action). Results showed that the cost occurred for interruption tones delivered 200 ms after the warning stimulus and remained relatively stable throughout most of the preparatory period, with a reduction in its magnitude during the last 200 ms before the intended time of movement. In experiment 2, which included conditions requiring a change in both when and what action is produced on the tone, results showed a larger cost when the switched-to action differed from the action being prepared. We discuss our results in light of neurophysiological experiments on motor preparation and suggest that intending to act is accompanied by a general inhibitory mechanism preventing premature motor output and a specific excitatory process pertaining to the intended movement. Interactions between these two mechanisms could account for our behavioral results.
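The switch cost described above is simply the difference between interrupted-condition RT and simple RT at each tone delay; a minimal sketch with invented numbers mirroring the reported pattern:

```python
import numpy as np

# Mean RT (ms) to interrupting tones at several delays after the warning
# stimulus, plus a simple-RT baseline; all values are invented.
delays_ms = np.array([200, 800, 1400, 2000, 2600, 2800])
rt_interrupt = np.array([310.0, 306.0, 309.0, 304.0, 282.0, 268.0])
rt_simple = 252.0   # externally triggered response, no concurrent plan

switch_cost = rt_interrupt - rt_simple   # cost of abandoning the plan
for d, c in zip(delays_ms, switch_cost):
    print(f"tone at {d:4d} ms: cost = {c:+5.0f} ms")
# Roughly stable cost that shrinks within ~200 ms of the intended
# movement time (here, 3000 ms after the warning stimulus).
```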


2019
Author(s): David A. Tovar, Micah M. Murray, Mark T. Wallace

Objects are the fundamental building blocks of how we create a representation of the external world. One major distinction among objects is between those that are animate and those that are inanimate. Many objects are specified by more than a single sense, yet the manner in which multisensory objects are represented by the brain remains poorly understood. Using representational similarity analysis of human EEG signals, we show enhanced encoding of audiovisual objects when compared with their corresponding visual and auditory objects. Surprisingly, we discovered that the often-found processing advantage for animate objects was not evident in a multisensory context, owing to greater neural enhancement of inanimate objects, the more weakly encoded objects under unisensory conditions. Further analysis showed that the selective enhancement of inanimate audiovisual objects corresponded with an increase in shared representations across brain areas, suggesting that the neural enhancement was mediated by multisensory integration. Moreover, a distance-to-bound analysis provided critical links between the neural findings and behavior. Improvements in neural decoding at the individual exemplar level for audiovisual inanimate objects predicted reaction time differences between multisensory and unisensory presentations during a go/no-go animate categorization task. Interestingly, links between neural activity and behavioral measures were most prominent 100 to 200 ms and 350 to 500 ms after stimulus presentation, corresponding to time periods associated with sensory evidence accumulation and decision-making, respectively. Collectively, these findings provide key insights into a fundamental process the brain uses to maximize the information it captures across sensory systems to perform object recognition.

Significance Statement: Our world is filled with an ever-changing milieu of sensory information that we are able to seamlessly transform into meaningful perceptual experience. We accomplish this feat by combining different features from our senses to construct objects. However, despite the fact that our senses do not work in isolation but rather in concert with each other, little is known about how the brain combines the senses to form object representations. Here, we used EEG and machine learning to study how the brain processes auditory, visual, and audiovisual objects. Surprisingly, we found that non-living objects, the objects that were more difficult to process with one sense alone, benefited the most from engaging multiple senses.
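Representational similarity analysis, the core method here, compares the geometry of condition-by-condition dissimilarity matrices. A minimal sketch of the generic pipeline (array shapes, names, and the synthetic data are assumptions, not the authors' pipeline):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Condensed representational dissimilarity matrix: pairwise
    correlation distance between condition patterns, where `patterns`
    is (n_conditions, n_channels) at one time point."""
    return pdist(patterns, metric="correlation")

def rdm_similarity(rdm_a, rdm_b):
    """Spearman correlation between two RDMs: higher means more
    shared representational geometry."""
    return spearmanr(rdm_a, rdm_b).correlation

# Synthetic example: 20 object exemplars x 64 EEG channels per modality.
rng = np.random.default_rng(1)
visual = rng.normal(size=(20, 64))
auditory = rng.normal(size=(20, 64))
audiovisual = 0.5 * (visual + auditory) + rng.normal(scale=0.5, size=(20, 64))

print(rdm_similarity(rdm(audiovisual), rdm(visual)))
print(rdm_similarity(rdm(audiovisual), rdm(auditory)))
```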


2018 · Vol 6 (s1) · pp. S169-S181
Author(s): Marcello Costantini

Beyond the trivial assumption that without a body we could neither gather sensory information from the environment nor act upon it, our particular body, right here, right now, both enables and constrains our perception of the environment. In this review, I provide empirical support for the idea that our physical body can narrow the set of our possible interactions with the environment by shaping the way we perceive the stimuli around us. I propose that such effects arise from the influence of our physical body (that is, the flesh-and-bone body) on the oscillatory dynamics of intrinsic brain activity.


2012 · Vol 25 (0) · pp. 122
Author(s): Michael Barnett-Cowan, Jody C. Culham, Jacqueline C. Snow

The orientation at which objects are most easily recognized, the perceptual upright (PU), is influenced by body orientation with respect to gravity. To date, the influence of these cues on object recognition has only been measured within the visual system. Here we investigate whether objects explored through touch alone are similarly influenced by body and gravitational information. Using the Oriented CHAracter Recognition Test (OCHART) adapted for haptics, blindfolded right-handed observers indicated whether the symbol ‘p’ presented in various orientations was the letter ‘p’ or ‘d’ following active touch. The average of ‘p-to-d’ and ‘d-to-p’ transitions was taken as the haptic PU. Sensory information was manipulated by positioning observers in different orientations relative to gravity with the head, body, and hand aligned. Results show that haptic object recognition is equally influenced by body and gravitational reference frames, but with a constant leftward bias. This leftward bias in the haptic PU resembles leftward biases reported for visual object recognition. The influence of body orientation and gravity on the haptic PU was well predicted by an equally weighted vectorial sum of the directions indicated by these cues. Our results demonstrate that information from different reference frames influences the perceptual upright in haptic object recognition. Taken together with similar investigations in vision, our findings suggest that reliance on body and gravitational frames of reference helps maintain optimal object recognition. Equally relying on body and gravitational information may facilitate haptic exploration with an upright posture, while compensating for poor vestibular sensitivity when tilted.
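The equally weighted vectorial-sum model has a direct expression: the predicted upright is the direction of the sum of unit vectors along the body and gravity cues. A minimal sketch (the angle convention is mine):

```python
import numpy as np

def predicted_upright(body_deg, gravity_deg, w_body=0.5, w_gravity=0.5):
    """Perceptual upright predicted as the direction of a weighted sum
    of unit vectors along the body and gravity cues (degrees); equal
    weights give the model described above."""
    angles = np.radians([body_deg, gravity_deg])
    weights = np.array([w_body, w_gravity])
    v = weights @ np.column_stack([np.cos(angles), np.sin(angles)])
    return np.degrees(np.arctan2(v[1], v[0]))

print(predicted_upright(0, 0))    # cues agree: upright at 0 deg
print(predicted_upright(90, 0))   # observer tilted 90 deg: prediction 45 deg,
                                  # halfway between body and gravity cues
```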


2004 · Vol 5 (3)
Author(s): Marie Avillac, Etienne Olivier, Sophie Denève, Suliann Ben Hamed, Jean-René Duhamel

1997 · Vol 9 (2) · pp. 222-237
Author(s): Alexandre Pouget, Terrence J. Sejnowski

Sensorimotor transformations are nonlinear mappings of sensory inputs to motor responses. We explore here the possibility that the responses of single neurons in the parietal cortex serve as basis functions for these transformations. Basis function decomposition is a general method for approximating nonlinear functions that is computationally efficient and well suited for adaptive modification. In particular, the responses of single parietal neurons can be approximated by the product of a Gaussian function of retinal location and a sigmoid function of eye position, called a gain field. A large set of such functions forms a basis set that can be used to perform an arbitrary motor response through a direct projection. We compare this hypothesis with other approaches that are commonly used to model population codes, such as computational maps and vectorial representations. Neither of these alternatives can fully account for the responses of parietal neurons, and they are computationally less efficient for nonlinear transformations. Basis functions also have the advantage of not depending on any coordinate system or reference frame. As a consequence, the position of an object can be represented in multiple reference frames simultaneously, a property consistent with the behavior of hemineglect patients with lesions in the parietal cortex.
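The gain-field units described above are easy to write down: each unit multiplies a Gaussian of retinal position by a sigmoid of eye position, and a quantity such as head-centered target position (retinal plus eye position) can be read out linearly. A minimal sketch with illustrative parameters:

```python
import numpy as np

def basis_responses(retinal_pos, eye_pos, centers, slopes, sigma=10.0):
    """One response per unit: Gaussian of retinal location times a
    sigmoidal eye-position gain field."""
    gauss = np.exp(-((retinal_pos - centers) ** 2) / (2.0 * sigma ** 2))
    gain = 1.0 / (1.0 + np.exp(-slopes * eye_pos))
    return gauss * gain

rng = np.random.default_rng(2)
centers = np.linspace(-40, 40, 30)    # preferred retinal locations (deg)
slopes = rng.uniform(-0.2, 0.2, 30)   # eye-position gain-field slopes

# Head-centered position = retinal + eye position, as a linear readout.
grid = [(r, e) for r in range(-30, 31, 5) for e in range(-20, 21, 5)]
X = np.array([basis_responses(r, e, centers, slopes) for r, e in grid])
y = np.array([r + e for r, e in grid])
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.abs(X @ w - y).max())   # residual of the linear approximation
```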


2018
Author(s): Richard D. Lange, Ankani Chattoraj, Jeffrey M. Beck, Jacob L. Yates, Ralf M. Haefner

Human decisions are known to be systematically biased. A prominent example of such a bias occurs when integrating a sequence of sensory evidence over time. Previous empirical studies differ in the nature of the bias they observe, ranging from favoring early evidence (primacy) to favoring late evidence (recency). Here, we present a unifying framework that explains these biases and makes novel psychophysical and neurophysiological predictions. By explicitly modeling both the approximate and the hierarchical nature of inference in the brain, we show that temporal biases depend on the balance between “sensory information” and “category information” in the stimulus. Finally, we present new data from a human psychophysics task that confirm a critical prediction of our framework: effective temporal integration strategies can be robustly changed within each subject. These data also allow us to exclude alternative explanations through quantitative model comparison.
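The primacy/recency contrast can be illustrated with a toy accumulator; this is not the authors' hierarchical inference model, just a sketch of how a single gain parameter shifts the effective weighting of early versus late evidence:

```python
import numpy as np

def temporal_weights(n_frames, gamma):
    """Effective weight of each evidence frame under the update
    x_t = gamma * x_{t-1} + e_t: frame t contributes
    gamma^(n_frames - 1 - t) to the final state. gamma > 1 favors
    early frames (primacy); gamma < 1 favors late frames (recency)."""
    w = gamma ** np.arange(n_frames - 1, -1, -1, dtype=float)
    return w / w.sum()

print(temporal_weights(5, 1.3).round(3))  # primacy-like weighting
print(temporal_weights(5, 0.7).round(3))  # recency-like weighting
print(temporal_weights(5, 1.0).round(3))  # flat: perfect integration
```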


2018
Author(s): Gareth Harris, Taihong Wu, Gaia Linfield, Myung-Kyu Choi, He Liu, ...

In the natural environment, animals often encounter multiple sensory cues that are simultaneously present. The nervous system integrates the relevant sensory information to generate behavioral responses that have adaptive value. However, the signal transduction pathways and the molecules that regulate integrated behavioral responses to multiple sensory cues are not well defined. Here, we characterize a collective modulatory basis for a behavioral decision in C. elegans when the animal is presented with an attractive food source together with a repulsive odorant. We show that distributed neuronal components in the worm nervous system and several neuromodulators orchestrate the decision-making process, suggesting that various states and contexts may modulate the multisensory integration. Among these modulators, we identify a new function of a conserved TGF-β pathway that regulates the integrated decision by inhibiting the signaling from a set of central neurons. Interestingly, we find that a common set of modulators, including the TGF-β pathway, regulates the integrated response to pairings of different foods and repellents. Together, our results provide insights into the modulatory signals regulating multisensory integration and reveal a potential mechanistic basis for the complex pathology underlying defects in multisensory processing shared by common neurological diseases.

Author Summary: The present study characterizes the modulation of a behavioral decision in C. elegans when the worm is presented with a food lawn that is paired with a repulsive smell. We show that multiple sensory neurons and interneurons play roles in making the decision. We also identify several modulatory molecules that are essential for the integrated decision when the animal faces a choice between cues of opposing valence. We further show that many of these factors, which often represent different states and contexts, are common to behavioral decisions that integrate sensory information from different types of foods and repellents. Overall, our results reveal a collective molecular and cellular basis for the integration of simultaneously present attractive and repulsive cues to fine-tune decision-making.

