Bridging the gap between theories of sensory cue integration and the physiology of multisensory neurons
2013 ◽ Vol 14 (6) ◽ pp. 429-442
Author(s): Christopher R. Fetsch ◽ Gregory C. DeAngelis ◽ Dora E. Angelaki

Neuroscience ◽ 2019 ◽ Vol 408 ◽ pp. 378-387
Author(s): Qadeer Arshad ◽ Marta Casanovas Ortega ◽ Usman Goga ◽ Rhannon Lobo ◽ Shuaib Siddiqui ◽ ...

Perception ◽ 2013 ◽ Vol 42 (4) ◽ pp. 477-479
Author(s): Paul Barry Hibbard
Author(s): Daniel S. Gareau ◽ Charles Vrattos ◽ James Browning ◽ Samantha R. Lish ◽ Benjamin Firester ◽ ...

2014 ◽ Vol 369 (1635) ◽ pp. 20120512
Author(s): Rebecca Knight ◽ Caitlin E. Piette ◽ Hector Page ◽ Daniel Walters ◽ Elizabeth Marozzi ◽ ...

How the brain combines information from different sensory modalities and of differing reliability is an important and still-unanswered question. Using the head direction (HD) system as a model, we explored the resolution of conflicts between landmarks and background cues. Sensory cue integration models predict averaging of the two cues, whereas attractor models predict capture of the signal by the dominant cue. We found that a visual landmark mostly captured the HD signal at low conflicts; however, as the conflict grew, the cells showed an increasing propensity to integrate the cues. A large conflict presented to naive rats resulted in greater visual cue capture (less integration) than in experienced rats, revealing an effect of experience. We propose that weighted cue integration in HD cells arises from dynamic plasticity of the feed-forward inputs to the network, causing within-trial spatial redistribution of the visual inputs onto the ring. This suggests that an attractor network can implement decision processes about cue reliability using simple architecture and learning rules, thus providing a potential neural substrate for weighted cue integration.
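The two model classes contrasted in this abstract make different quantitative predictions for a landmark/background conflict. A minimal sketch (the cue directions and the weight are illustrative values, not from the study): integration predicts a reliability-weighted circular average of the two head-direction cues, while attractor-style capture predicts a winner-take-all snap to the dominant cue.

```python
import math

def weighted_integration(landmark_deg, background_deg, w_landmark):
    """Reliability-weighted circular average of two head-direction cues."""
    # Average the cues as unit vectors, then take the resultant angle,
    # so the computation behaves correctly across the 0/360 wrap-around.
    wx = (w_landmark * math.cos(math.radians(landmark_deg))
          + (1 - w_landmark) * math.cos(math.radians(background_deg)))
    wy = (w_landmark * math.sin(math.radians(landmark_deg))
          + (1 - w_landmark) * math.sin(math.radians(background_deg)))
    return math.degrees(math.atan2(wy, wx)) % 360

def cue_capture(landmark_deg, background_deg, w_landmark):
    """Attractor-style prediction: the dominant cue takes the whole signal."""
    return landmark_deg if w_landmark >= 0.5 else background_deg

# A 40-degree conflict between landmark (0 deg) and background (40 deg),
# with the landmark weighted 0.7 (hypothetical reliability):
integrated = weighted_integration(0, 40, 0.7)  # partway between the cues
captured = cue_capture(0, 40, 0.7)             # snaps to the landmark
```

The paper's observed pattern (capture at small conflicts, increasing integration at larger ones) sits between these two pure predictions.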


2018 ◽ Vol 31 (7) ◽ pp. 645-674
Author(s): Maria Gallagher ◽ Elisa Raffaella Ferrè

Abstract: In the past decade, there has been a rapid advance in Virtual Reality (VR) technology. Key to the user’s VR experience are multimodal interactions involving all senses. The human brain must integrate real-time visual, auditory, vestibular and proprioceptive inputs to produce the compelling and captivating feeling of immersion in a VR environment. A serious problem with VR is that users may develop symptoms similar to motion sickness, a malady called cybersickness. At present, the underlying cause of cybersickness is not fully understood. Cybersickness may be due to a discrepancy between the sensory signals which provide information about the body’s orientation and motion: in many VR applications, optic flow elicits an illusory sensation of motion which tells users that they are moving in a certain direction with a certain acceleration. However, since users are not actually moving, their proprioceptive and vestibular organs provide no cues of self-motion. These conflicting signals may lead to sensory discrepancies and eventually cybersickness. Here we review the current literature to develop a conceptual scheme for understanding the neural mechanisms of cybersickness. We discuss an approach to cybersickness based on sensory cue integration, focusing on the dynamic re-weighting of visual and vestibular signals for self-motion.
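The dynamic re-weighting this review focuses on is commonly modelled as inverse-variance weighting of the visual and vestibular self-motion estimates. A minimal sketch (the noise values are illustrative, not taken from the paper): as visual noise grows, the weight shifts toward the vestibular cue.

```python
def reliability_weights(sigma_visual, sigma_vestibular):
    """Inverse-variance weights for visual vs. vestibular self-motion cues.

    Each cue's weight is proportional to its reliability (1 / variance),
    and the two weights sum to one.
    """
    r_vis = 1.0 / sigma_visual ** 2
    r_vest = 1.0 / sigma_vestibular ** 2
    total = r_vis + r_vest
    return r_vis / total, r_vest / total

# With a clear display, vision is the more reliable cue and dominates;
# with a noisy/degraded display, the weighting flips toward the vestibular cue.
w_vis_clear, w_vest_clear = reliability_weights(1.0, 2.0)
w_vis_noisy, w_vest_noisy = reliability_weights(4.0, 2.0)
```

On this account, a persistent mismatch between a strong visual motion signal and silent vestibular input is exactly the kind of conflict that re-weighting must resolve, and a candidate trigger for cybersickness.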


2016 ◽ Vol 29 (1-3) ◽ pp. 7-28
Author(s): Cesare V. Parise

Crossmodal correspondences refer to the systematic associations often found across seemingly unrelated sensory features from different sensory modalities. Such phenomena constitute a universal trait of multisensory perception even in non-human species, and seem to result, at least in part, from the adaptation of sensory systems to natural scene statistics. Despite recent developments in the study of crossmodal correspondences, there are still a number of standing questions about their definition, their origins, their plasticity, and their underlying computational mechanisms. In this paper, I will review such questions in the light of current research on sensory cue integration, where crossmodal correspondences can be conceptualized in terms of natural mappings across different sensory cues that are present in the environment and learnt by the sensory systems. Finally, I will provide some practical guidelines for the design of experiments that might shed new light on crossmodal correspondences.


2020
Author(s): Peter Scarfe

Abstract: Sensory cue integration is one of the primary areas in which a normative mathematical framework has been used to (1) define the “optimal” way in which to make decisions based upon ambiguous sensory information and (2) compare these predictions to an organism’s behaviour. The conclusion from such studies is that sensory cues are integrated in a statistically optimal fashion. Problematically, numerous alternative computational frameworks exist by which sensory cues could be integrated, many of which could be described as “optimal” based on different optimising criteria. Existing studies rarely assess the evidence relative to different candidate models, resulting in an inability to conclude that sensory cues are integrated according to the experimenter’s preferred framework. The aims of the present paper are to summarise and highlight the implicit assumptions rarely acknowledged in testing models of sensory cue integration, as well as to introduce an unbiased and principled method by which to assess the probability with which experimental data are consistent with a set of candidate models.
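The “statistically optimal” framework this abstract critiques is the standard maximum-likelihood (MLE) cue-combination model, which predicts both the combined estimate and its variance. A hedged sketch, with made-up means and noise levels, of that prediction and of the kind of likelihood-based comparison between candidate models the paper argues for (here, MLE integration vs. a hypothetical single-cue model):

```python
import math

def mle_prediction(mu1, sigma1, mu2, sigma2):
    """MLE cue combination: inverse-variance-weighted mean and reduced variance."""
    w1 = sigma2 ** 2 / (sigma1 ** 2 + sigma2 ** 2)
    mu = w1 * mu1 + (1 - w1) * mu2
    var = (sigma1 ** 2 * sigma2 ** 2) / (sigma1 ** 2 + sigma2 ** 2)
    return mu, var

def gauss_loglik(x, mu, var):
    """Log-likelihood of an observation under a Gaussian model prediction."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

# Two cues with equal noise: MLE predicts the midpoint (13.0) and halved variance.
mu_opt, var_opt = mle_prediction(10.0, 2.0, 16.0, 2.0)

# Score an observed estimate of 12.5 under two candidate models:
ll_integration = gauss_loglik(12.5, mu_opt, var_opt)     # MLE integration
ll_single_cue = gauss_loglik(12.5, 10.0, 4.0)            # "cue 1 only" model
```

Comparing log-likelihoods across a set of such candidate models, rather than testing only the experimenter’s preferred one, is the kind of principled model comparison the abstract calls for.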

