Exploring the time window for causal inference and the multisensory integration of actions and their visual effects

2020, Vol 7 (8), pp. 192056
Author(s): Nienke B. Debats, Herbert Heuer

Successful computer use requires the operator to link the movement of the cursor to that of his or her hand. Previous studies suggest that the brain establishes this perceptual link through multisensory integration, whereby the causality evidence that drives the integration is provided by the correlated hand and cursor movement trajectories. Here, we explored the temporal window during which this causality evidence is effective. We used a basic cursor-control task, in which participants performed out-and-back reaching movements with their hand on a digitizer tablet. A corresponding cursor movement could be shown on a monitor, yet slightly rotated by an angle that varied from trial to trial. Upon completion of the backward movement, participants judged the endpoint of the outward hand or cursor movement. The mutually biased judgements that typically result reflect the integration of the proprioceptive information on hand endpoint with the visual information on cursor endpoint. We here manipulated the time period during which the cursor was visible, thereby selectively providing causality evidence either before or after sensory information regarding the to-be-judged movement endpoint was available. Specifically, the cursor was visible either during the outward or backward hand movement (conditions Out and Back, respectively). Our data revealed reduced integration in the condition Back compared with the condition Out, suggesting that causality evidence available before the to-be-judged movement endpoint is more powerful than later evidence in determining how strongly the brain integrates the endpoint information. This finding further suggests that sensory integration is not delayed until a judgement is requested.

2019
Author(s): David A. Tovar, Micah M. Murray, Mark T. Wallace

Abstract: Objects are the fundamental building blocks of how we create a representation of the external world. One major distinction amongst objects is between those that are animate versus inanimate. Many objects are specified by more than a single sense, yet the nature by which multisensory objects are represented by the brain remains poorly understood. Using representational similarity analysis of human EEG signals, we show enhanced encoding of audiovisual objects when compared to their corresponding visual and auditory objects. Surprisingly, we discovered that the often-found processing advantage for animate objects was not evident in a multisensory context, owing to a greater neural enhancement of inanimate objects, the more weakly encoded objects under unisensory conditions. Further analysis showed that the selective enhancement of inanimate audiovisual objects corresponded with an increase in shared representations across brain areas, suggesting that neural enhancement was mediated by multisensory integration. Moreover, a distance-to-bound analysis provided critical links between neural findings and behavior. Improvements in neural decoding at the individual exemplar level for audiovisual inanimate objects predicted reaction-time differences between multisensory and unisensory presentations during a go/no-go animate categorization task. Interestingly, links between neural activity and behavioral measures were most prominent 100 to 200 ms and 350 to 500 ms after stimulus presentation, corresponding to time periods associated with sensory evidence accumulation and decision-making, respectively. Collectively, these findings provide key insights into a fundamental process the brain uses to maximize the information it captures across sensory systems to perform object recognition.
Significance Statement: Our world is filled with an ever-changing milieu of sensory information that we are able to seamlessly transform into meaningful perceptual experience. We accomplish this feat by combining different features from our senses to construct objects. However, despite the fact that our senses do not work in isolation but rather in concert with each other, little is known about how the brain combines the senses to form object representations. Here, we used EEG and machine learning to study how the brain processes auditory, visual, and audiovisual objects. Surprisingly, we found that non-living objects, the objects that were more difficult to process with one sense alone, benefited the most from engaging multiple senses.


2018, Vol 119 (5), pp. 1981-1992
Author(s): Laura Mikula, Valérie Gaveau, Laure Pisella, Aarlenne Z. Khan, Gunnar Blohm

When reaching to an object, information about the target location as well as the initial hand position is required to program the motor plan for the arm. The initial hand position can be determined from proprioceptive information as well as from visual information, if available. Bayes-optimal integration posits that we utilize all available information, with greater weighting on the sense that is more reliable, thus generally weighting visual information more than the usually less reliable proprioceptive information. The criterion by which information is weighted has not been explicitly investigated; it has been assumed that the weights are based on task- and effector-dependent sensory reliability, requiring an explicit neuronal representation of variability. However, the weights could also be determined implicitly through learned modality-specific integration weights, rather than on the basis of effector-dependent reliability. While the former hypothesis predicts different proprioceptive weights for the left and right hands, e.g., due to different reliabilities of dominant vs. nondominant hand proprioception, the latter hypothesis predicts the same integration weights for both hands. We found that the proprioceptive weights for the left and right hands were extremely consistent regardless of differences in sensory variability for the two hands, as measured in two separate complementary tasks. Thus we propose that proprioceptive weights during reaching are learned across both hands, with a high interindividual range but independent of each hand’s specific proprioceptive variability.
NEW & NOTEWORTHY: How visual and proprioceptive information about the hand are integrated to plan a reaching movement is still debated. The goal of this study was to clarify how the weights assigned to vision and proprioception during multisensory integration are determined. We found evidence that the integration weights are modality specific rather than based on the sensory reliabilities of the effectors.
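The two hypotheses contrasted above make different numerical predictions for the proprioceptive weight of each hand. A minimal sketch of that contrast follows; the variances and the learned weight are hypothetical illustrations, not the study's measurements:

```python
# Contrast of the two accounts: a weight derived from each hand's
# proprioceptive variability vs. a single learned modality-specific weight.
# All numerical values here are illustrative, not data from the study.

def reliability_based_weight(var_proprio, var_vision):
    """Proprioceptive weight if set by effector-specific reliability
    (inverse variance), as in Bayes-optimal integration."""
    return (1 / var_proprio) / (1 / var_proprio + 1 / var_vision)

LEARNED_WEIGHT = 0.3  # hypothetical modality-specific weight, identical for both hands

var_vision = 1.0
for hand, var_proprio in [("dominant", 2.0), ("nondominant", 4.0)]:
    w_reliability = reliability_based_weight(var_proprio, var_vision)
    # The reliability account predicts different weights per hand;
    # the learned-weight account predicts the same value for both.
    print(hand, round(w_reliability, 3), LEARNED_WEIGHT)
```

Under the illustrative variances, the reliability account yields distinct weights for the two hands (1/3 vs. 0.2), while the learned-weight account yields one shared value, which is the pattern the study reports.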


2021, Vol 11 (1)
Author(s): Nienke B. Debats, Herbert Heuer, Christoph Kayser

Abstract: To organize the plethora of sensory signals from our environment into a coherent percept, our brain relies on the processes of multisensory integration and sensory recalibration. We here asked how visuo-proprioceptive integration and recalibration are shaped by the presence of more than one visual stimulus, hence paving the way to study multisensory perception under more naturalistic settings with multiple signals per sensory modality. We used a cursor-control task in which proprioceptive information on the endpoint of a reaching movement was complemented by two visual stimuli providing additional information on the movement endpoint. The visual stimuli were briefly shown, one synchronously with the hand reaching the movement endpoint, the other delayed. In Experiment 1, the judgments of hand movement endpoint revealed integration and recalibration biases oriented towards the position of the synchronous stimulus and away from the delayed one. In Experiment 2 we contrasted two alternative accounts: that only the temporally more proximal visual stimulus enters integration, similar to a winner-takes-all process, or that the influences of both stimuli superpose. The proprioceptive biases revealed that integration, and likely also recalibration, is shaped by the superposed contributions of multiple stimuli rather than by only the most powerful individual one.
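The winner-takes-all and superposition accounts contrasted in Experiment 2 can be phrased as simple bias models. The sketch below is hypothetical; the positions and weights are illustrative only and are not taken from the study:

```python
# Two candidate accounts of how two visual stimuli bias the
# proprioceptive estimate of the movement endpoint.
# Positions and weights are illustrative, not values from the study.

def winner_takes_all(hand, synchronous, delayed, w_sync=0.5, w_delay=0.2):
    """Only the temporally more proximal (synchronous) stimulus biases
    the judgment; the delayed stimulus is ignored entirely."""
    return hand + w_sync * (synchronous - hand)

def superposition(hand, synchronous, delayed, w_sync=0.5, w_delay=0.2):
    """The biases induced by both stimuli add up."""
    return (hand + w_sync * (synchronous - hand)
                 + w_delay * (delayed - hand))

hand, sync, delay = 0.0, 1.0, -1.0
# Under superposition, the delayed stimulus pulls the judgment back
# toward its own position; under winner-takes-all it has no effect.
print(winner_takes_all(hand, sync, delay))  # 0.5
print(superposition(hand, sync, delay))     # 0.3
```

The study's data favored the superposition-style account: the measured biases reflected contributions of both stimuli rather than only the strongest one.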


2017, Vol 117 (4), pp. 1569-1580
Author(s): Nienke B. Debats, Marc O. Ernst, Herbert Heuer

Humans are well able to operate tools whereby their hand movement is linked, via a kinematic transformation, to a spatially distant object moving in a separate plane of motion. An everyday example is controlling a cursor on a computer monitor. Despite these separate reference frames, the perceived positions of the hand and the object were found to be biased toward each other. We propose that this perceptual attraction is based on the principles by which the brain integrates redundant sensory information of single objects or events, known as optimal multisensory integration. That is, 1) sensory information about the hand and the tool is weighted according to its relative reliability (i.e., inverse variance), and 2) the unisensory reliabilities sum up in the integrated estimate. We assessed whether perceptual attraction is consistent with the predictions of the optimal multisensory integration model. We used a cursor-control tool-use task in which we manipulated the relative reliability of the unisensory hand and cursor position estimates. The perceptual biases shifted according to these relative reliabilities, with an additional bias due to contextual factors that were present in experiment 1 but not in experiment 2. The variances of the biased position judgments were, however, systematically larger than the predicted optimal variances. Our findings suggest that the perceptual attraction in tool use results from a reliability-based weighting mechanism similar to optimal multisensory integration, but that certain boundary conditions for optimality might not be satisfied.
NEW & NOTEWORTHY: Kinematic tool use is associated with a perceptual attraction between the spatially separated hand and the effective part of the tool. We provide a formal account for this phenomenon, thereby showing that the process behind it is similar to optimal integration of sensory information relating to single objects.
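The two model predictions named above, inverse-variance weighting and summing reliabilities, can be sketched numerically. This is a minimal illustration with made-up variances, not values from the study:

```python
# Minimal sketch of reliability-weighted (inverse-variance) integration,
# the standard maximum-likelihood model of multisensory cue combination.
# The variances below are illustrative, not data from the study.

def integrate(x_hand, var_hand, x_cursor, var_cursor):
    """Combine two position estimates by inverse-variance weighting."""
    w_hand = (1 / var_hand) / (1 / var_hand + 1 / var_cursor)
    w_cursor = 1 - w_hand
    x_hat = w_hand * x_hand + w_cursor * x_cursor
    # The reliabilities (inverse variances) sum in the integrated
    # estimate, so the combined variance is smaller than either
    # unisensory variance alone.
    var_hat = 1 / (1 / var_hand + 1 / var_cursor)
    return x_hat, var_hat

x_hat, var_hat = integrate(x_hand=0.0, var_hand=4.0, x_cursor=2.0, var_cursor=1.0)
print(x_hat, var_hat)  # the more reliable cursor dominates: 1.6 0.8
```

The study's finding that measured variances exceeded these optimal predictions is what motivates the "boundary conditions for optimality" caveat: the weighting followed this scheme, but the predicted variance reduction was not fully realized.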


Author(s): Kathleen E. Cullen

As we go about our everyday activities, our brain computes accurate estimates of both our motion relative to the world and our orientation relative to gravity. Essential to this computation is the information provided by the vestibular system; it detects the rotational velocity and linear acceleration of our heads relative to space, making a fundamental contribution to our perception of self-motion and spatial orientation. Additionally, in everyday life, our perception of self-motion depends on the integration of both vestibular and nonvestibular cues, including visual and proprioceptive information. Furthermore, the integration of motor-related information is also required for perceptual stability, so that the brain can distinguish whether the experienced sensory inflow was a result of active self-motion through the world or whether it was instead externally generated. To date, understanding how the brain encodes and integrates sensory cues with motor signals for the perception of self-motion during natural behaviors remains a major goal in neuroscience. Recent experiments have (i) provided new insights into the neural code used to represent sensory information in vestibular pathways, (ii) established that vestibular pathways are inherently multimodal at the earliest stages of processing, and (iii) revealed that self-motion information processing is adjusted to meet the needs of specific tasks. This article reviews our current level of understanding of how the brain integrates sensory information and motor-related signals to encode self-motion and ensure perceptual stability during everyday activities.


Author(s): Farran Briggs

Many mammals, including humans, rely primarily on vision to sense the environment. While a large proportion of the brain is devoted to vision in highly visual animals, there are not enough neurons in the visual system to support a neuron-per-object look-up table. Instead, visual animals evolved ways to rapidly and dynamically encode an enormous diversity of visual information using minimal numbers of neurons (merely hundreds of millions of neurons and billions of connections!). In the mammalian visual system, a visual image is essentially broken down into simple elements that are reconstructed through a series of processing stages, most of which occur beneath consciousness. Importantly, visual information processing is not simply a serial progression along the hierarchy of visual brain structures (e.g., retina to visual thalamus to primary visual cortex to secondary visual cortex, etc.). Instead, connections within and between visual brain structures exist in all possible directions: feedforward, feedback, and lateral. Additionally, many mammalian visual systems are organized into parallel channels, presumably to enable efficient processing of information about different and important features in the visual environment (e.g., color, motion). The overall operations of the mammalian visual system are to: (1) combine unique groups of feature detectors in order to generate object representations and (2) integrate visual sensory information with cognitive and contextual information from the rest of the brain. Together, these operations enable individuals to perceive, plan, and act within their environment.


2012, Vol 108 (11), pp. 2912-2930
Author(s): David Thura, Julie Beauregard-Racine, Charles-William Fradet, Paul Cisek

It is often suggested that decisions are made when accumulated sensory information reaches a fixed accuracy criterion. This is supported by many studies showing a gradual build-up of neural activity to a threshold. However, the proposal that this build-up is caused by sensory accumulation is challenged by findings that decisions are based on information from a time window much shorter than the build-up process. Here, we propose that in natural conditions where the environment can suddenly change, the policy that maximizes reward rate is to estimate evidence by accumulating only novel information and then compare the result to a decreasing accuracy criterion. We suggest that the brain approximates this policy by multiplying an estimate of sensory evidence with a motor-related urgency signal, and that the latter is primarily responsible for the build-up of neural activity. We support this hypothesis using human behavioral data from a modified random-dot motion task in which motion coherence changes during each trial.
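The proposed policy, estimating evidence from novel information and gating it by a growing urgency signal, can be sketched as follows. All parameter values are illustrative and are not fitted to the study's data:

```python
# Sketch of an urgency-gating model: a low-pass-filtered evidence
# estimate (emphasizing novel information over the full accumulated
# history) is multiplied by a linearly growing urgency signal, and a
# decision is made when the product crosses a fixed bound.
# All parameters are illustrative, not values from the study.

def urgency_gating(evidence, bound=1.0, slope=0.2, tau=2.0):
    """Return (decision time step, choice sign), or None if no decision."""
    filtered = 0.0
    for t, e in enumerate(evidence, start=1):
        # Low-pass filter with time constant tau: tracks recent,
        # novel input rather than integrating the entire history.
        filtered += (e - filtered) / tau
        urgency = slope * t          # urgency grows with elapsed time
        signal = filtered * urgency  # gated decision variable
        if abs(signal) >= bound:
            return t, 1 if signal > 0 else -1
    return None

# Even constant weak evidence eventually triggers a decision as the
# urgency signal mounts, unlike a fixed-criterion accumulator.
result = urgency_gating([0.3] * 50)
```

This captures the key qualitative point of the abstract: the build-up of the decision variable is driven primarily by the urgency term, while the evidence estimate itself reflects only a short, recent time window.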


Author(s): Min Guo, Yinghua Yu, Jiajia Yang, Jinglong Wu

To perceive our world, we make full use of multiple sources of sensory information derived from different modalities, which include five basic sensory systems: visual, auditory, tactile, olfactory, and gustatory. In the real world, we normally acquire information from different sensory receptors simultaneously. Therefore, multisensory integration in the brain plays an important role in performance and perception. This review focuses on crossmodal interactions between vision and touch. Many previous studies have indicated that visual information affects tactile perception and that, in return, tactile stimulation also activates MT, the main visual motion-processing area. However, few studies have explored how crossmodal visual-tactile information is processed. Here, the authors highlight the brain's crossmodal processing mechanism. They show that visual-tactile integration proceeds in two stages: combination and integration.


2017, Vol 118 (3), pp. 1598-1608
Author(s): Léo Arnoux, Sebastien Fromentin, Dario Farotto, Mathieu Beraneck, Joseph McIntyre, ...

To perform goal-oriented hand movements, humans combine multiple sensory signals (e.g., vision and proprioception) that can be encoded in various reference frames (body centered and/or exo-centered). In a previous study (Tagliabue M, McIntyre J. PLoS One 8: e68438, 2013), we showed that, when aligning a hand to a remembered target orientation, the brain encodes both target and response in visual space when the target is sensed by one hand and the response is performed by the other, even though both are sensed only through proprioception. Here we ask whether such visual encoding is due 1) to the necessity of transferring sensory information across the brain hemispheres, or 2) to the necessity, arising from the arms’ anatomical mirror symmetry, of transforming the joint signals of one limb into the reference frame of the other. To answer this question, we asked subjects to perform purely proprioceptive tasks in different conditions: Intra, the same arm sensing the target and performing the movement; Inter/Parallel, one arm sensing the target and the other reproducing its orientation; and Inter/Mirror, one arm sensing the target and the other mirroring its orientation. Performance was very similar between Intra and Inter/Mirror (conditions not requiring joint-signal transformations), while both differed from Inter/Parallel. Manipulation of the visual scene in a virtual reality paradigm showed visual encoding of proprioceptive information only in the latter condition. These results suggest that the visual encoding of purely proprioceptive tasks is not due to interhemispheric transfer of the proprioceptive information per se, but to the necessity of transforming joint signals between mirror-symmetric limbs.
NEW & NOTEWORTHY: Why does the brain encode goal-oriented, intermanual tasks in visual space, even in the absence of visual feedback about the target and the hand? We show that the visual encoding is not due to the transfer of proprioceptive signals between brain hemispheres per se, but to the need, arising from the mirror symmetry of the two limbs, to transform the joint angle signals of one arm into different joint signals of the other.


2020, Vol 117 (13), pp. 7510-7515
Author(s): Tessel Blom, Daniel Feuerriegel, Philippa Johnson, Stefan Bode, Hinze Hogendoorn

The transmission of sensory information through the visual system takes time. As a result of these delays, the visual information available to the brain always lags behind the timing of events in the present moment. Compensating for these delays is crucial for functioning within dynamic environments, since interacting with a moving object (e.g., catching a ball) requires real-time localization of the object. One way the brain might achieve this is via prediction of anticipated events. Using time-resolved decoding of electroencephalographic (EEG) data, we demonstrate that the visual system represents the anticipated future position of a moving object, showing that predictive mechanisms activate the same neural representations as afferent sensory input. Importantly, this activation is evident before sensory input corresponding to the stimulus position is able to arrive. Finally, we demonstrate that, when predicted events do not eventuate, sensory information arrives too late to prevent the visual system from representing what was expected but never presented. Taken together, we demonstrate how the visual system can implement predictive mechanisms to preactivate sensory representations, and argue that this might allow it to compensate for its own temporal constraints, allowing us to interact with dynamic visual environments in real time.

