fMRI correlates of visual cue combination

2010 ◽  
Vol 3 (9) ◽  
pp. 850-850
Author(s):  
A. E. Welchman ◽  
A. Deubelius ◽  
S. J. Maier ◽  
H. H. Bülthoff ◽  
Z. Kourtzi
2021 ◽  
pp. 1-19
Author(s):  
Sophie Rohlf ◽  
Patrick Bruns ◽  
Brigitte Röder

Abstract Reliability-based cue combination is a hallmark of multisensory integration, while the role of cue reliability for crossmodal recalibration is less understood. The present study investigated whether visual cue reliability affects audiovisual recalibration in adults and children. Participants had to localize sounds, which were presented either alone or in combination with a spatially discrepant high- or low-reliability visual stimulus. In a previous study we had shown that the ventriloquist effect (indicating multisensory integration) was overall larger in the child groups and that the shift in sound localization toward the spatially discrepant visual stimulus decreased with visual cue reliability in all groups. The present study replicated the onset of the immediate ventriloquist aftereffect (a shift in unimodal sound localization following a single exposure to a spatially discrepant audiovisual stimulus) at the age of 6–7 years. In adults the immediate ventriloquist aftereffect depended on visual cue reliability, whereas the cumulative ventriloquist aftereffect (reflecting the audiovisual spatial discrepancies over the complete experiment) did not. In 6–7-year-olds the immediate ventriloquist aftereffect was independent of visual cue reliability. The present results are compatible with the idea that immediate and cumulative crossmodal recalibration are dissociable processes and that the immediate ventriloquist aftereffect is more closely related to genuine multisensory integration.
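The reliability-dependent ventriloquist shift described above follows from standard precision-weighted cue averaging. A minimal sketch, assuming illustrative noise values in degrees (the function name and parameters are not from the study):

```python
# Hypothetical sketch of reliability-weighted (precision-weighted) cue
# integration underlying the ventriloquist effect. All numbers below are
# illustrative assumptions, not the study's measured parameters.

def ventriloquist_shift(discrepancy_deg, sigma_a, sigma_v):
    """Predicted shift of the perceived sound location toward the visual
    stimulus, given auditory (sigma_a) and visual (sigma_v) noise in degrees."""
    r_a, r_v = 1.0 / sigma_a**2, 1.0 / sigma_v**2  # cue reliabilities
    w_v = r_v / (r_a + r_v)                        # weight given to vision
    return w_v * discrepancy_deg

# A high-reliability visual cue captures most of a 10-degree discrepancy ...
high = ventriloquist_shift(10.0, sigma_a=8.0, sigma_v=2.0)
# ... while a low-reliability visual cue produces a smaller shift.
low = ventriloquist_shift(10.0, sigma_a=8.0, sigma_v=8.0)
```

On this account, degrading visual reliability lowers the visual weight and thereby the integration shift, matching the pattern the abstract reports for the ventriloquist effect itself.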


2017 ◽  
Author(s):  
James Negen ◽  
Lisa Wen ◽  
Lore Thaler ◽  
Marko Nardini

Abstract Humans are effective at dealing with noisy, probabilistic information in familiar settings. One hallmark of this is Bayesian Cue Combination: combining multiple noisy estimates to increase precision beyond the best single estimate, taking into account their reliabilities. Here we show that adults also combine a novel audio cue to distance, akin to human echolocation, with a visual cue. Following two hours of training, subjects were more precise given both cues together versus the best single cue. This persisted when we changed the novel cue’s auditory frequency. Reliability changes also led to a re-weighting of cues without feedback, showing that they learned something more flexible than a rote decision rule for specific stimuli. The main findings replicated with a vibrotactile cue. These results show that the mature sensory apparatus can learn to flexibly integrate new sensory skills. The findings are unexpected considering previous empirical results and current models of multisensory learning.
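The precision benefit that defines Bayesian cue combination can be sketched numerically; the sigma values here are illustrative assumptions, not the study's data:

```python
# Sketch of the precision gain from optimal (precision-weighted) combination
# of two independent noisy estimates. Values are illustrative assumptions.

def combined_sigma(sigma1, sigma2):
    """Standard deviation of the optimal combined estimate of two
    independent cues with noise sigma1 and sigma2."""
    return (sigma1**2 * sigma2**2 / (sigma1**2 + sigma2**2)) ** 0.5

# e.g. a visual cue (sigma = 3) and a novel auditory cue (sigma = 4):
s = combined_sigma(3.0, 4.0)  # lower than either cue alone
```

The combined standard deviation is always below that of the best single cue, which is the behavioral signature the study tests for after training on the novel cue.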


2004 ◽  
Vol 4 (8) ◽  
pp. 699-699 ◽  
Author(s):  
A. J. Ecker ◽  
L. M. Heller

2020 ◽  
Vol 3 (1) ◽  
pp. 10501-1-10501-9
Author(s):  
Christopher W. Tyler

Abstract For the visual world in which we operate, the core issue is to conceptualize how its three-dimensional structure is encoded through the neural computation of multiple depth cues and their integration to a unitary depth structure. One approach to this issue is the full Bayesian model of scene understanding, but this is shown to require selection from the implausibly large number of possible scenes. An alternative approach is to propagate the implied depth structure solution for the scene through the “belief propagation” algorithm on general probability distributions. However, a more efficient model of local slant propagation is developed as an alternative.

The overall depth percept must be derived from the combination of all available depth cues, but a simple linear summation rule across, say, a dozen different depth cues would massively overestimate the perceived depth in the scene in cases where each cue alone provides a close-to-veridical depth estimate. On the other hand, a Bayesian averaging or “modified weak fusion” model for depth cue combination does not provide for the observed enhancement of perceived depth from weak depth cues. Thus, the current models do not account for the empirical properties of perceived depth from multiple depth cues.

The present analysis shows that these problems can be addressed by an asymptotic, or hyperbolic Minkowski, approach to cue combination. With appropriate parameters, this first-order rule gives strong summation for a few depth cues, but the effect of an increasing number of cues beyond that remains too weak to account for the available degree of perceived depth magnitude. Finally, an accelerated asymptotic rule is proposed to match the empirical strength of perceived depth as measured, with appropriate behavior for any number of depth cues.
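The contrast between linear summation and a Minkowski-norm combination rule can be sketched as follows; the exponent and cue values are illustrative assumptions, and this plain Minkowski norm stands in for, rather than reproduces, the paper's hyperbolic variant:

```python
# Sketch contrasting linear summation with a Minkowski-norm combination rule
# for depth cues. The exponent m and the cue values are illustrative
# assumptions; the paper's asymptotic/hyperbolic rule is a refinement of this.

def minkowski_depth(cues, m=2.0):
    """Minkowski-norm combination: D = (sum of d_i**m) ** (1/m).
    Larger m makes each additional cue contribute less to the total."""
    return sum(d**m for d in cues) ** (1.0 / m)

cues = [1.0] * 12                    # a dozen close-to-veridical cues
linear = sum(cues)                   # linear summation: a 12-fold overestimate
mink = minkowski_depth(cues, m=2.0)  # grows only as the square root of n
```

This shows the problem the abstract identifies: linear summation over-predicts depth by the number of cues, while the norm-based rule summates strongly for the first few cues and then saturates.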


2021 ◽  
pp. 1-15
Author(s):  
Kim McDonough ◽  
Rachael Lindberg ◽  
Pavel Trofimovich ◽  
Oguzhan Tekin

Abstract This replication study seeks to extend the generalizability of an exploratory study (McDonough et al., 2019) that identified holds (i.e., temporary cessation of dynamic movement by the listener) as a reliable visual cue of non-understanding. Conversations between second language (L2) English speakers in the Corpus of English as a Lingua Franca Interaction (CELFI; McDonough & Trofimovich, 2019) with non-understanding episodes (e.g., pardon?, what?, sorry?) were sampled and compared with understanding episodes (i.e., follow-up questions). External raters (N = 90) assessed the listener's comprehension under three rating conditions: +face/+voice, −face/+voice, and +face/−voice. The association between non-understanding and holds in McDonough et al. (2019) was confirmed. Although raters distinguished reliably between understanding and non-understanding episodes, they were not sensitive to facial expressions when judging listener comprehension. The initial and replication findings suggest that holds remain a promising visual signature of non-understanding that can be explored in future theoretically- and pedagogically-oriented contexts.


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Brittany C. Clawson ◽  
Emily J. Pickup ◽  
Amy Ensing ◽  
Laura Geneseo ◽  
James Shaver ◽  
...  

Abstract Learning-activated engram neurons play a critical role in memory recall. An untested hypothesis is that these same neurons play an instructive role in offline memory consolidation. Here we show that a visually-cued fear memory is consolidated during post-conditioning sleep in mice. We then use TRAP (targeted recombination in active populations) to genetically label or optogenetically manipulate primary visual cortex (V1) neurons responsive to the visual cue. Following fear conditioning, mice respond to activation of this visual engram population in a manner similar to visual presentation of fear cues. Cue-responsive neurons are selectively reactivated in V1 during post-conditioning sleep. Mimicking visual engram reactivation optogenetically leads to increased representation of the visual cue in V1. Optogenetic inhibition of the engram population during post-conditioning sleep disrupts consolidation of fear memory. We conclude that selective sleep-associated reactivation of learning-activated sensory populations serves as a necessary instructive mechanism for memory consolidation.

