Using the past to estimate sensory uncertainty

2019 ◽  
Author(s):  
Ulrik Beierholm ◽  
Tim Rohe ◽  
Ambra Ferrari ◽  
Oliver Stegle ◽  
Uta Noppeney

Abstract
To form the most reliable percept of the environment, the brain needs to represent sensory uncertainty. Current theories of perceptual inference assume that the brain computes sensory uncertainty instantaneously and independently for each stimulus. In a series of psychophysics experiments, human observers localized auditory signals that were presented in synchrony with spatially disparate visual signals. Critically, the visual noise changed dynamically over time with or without intermittent jumps. Our results show that observers integrate audiovisual inputs weighted by sensory reliability estimates that combine information from past and current signals, as predicted by an optimal Bayesian learner or approximate strategies of exponential discounting. Our results challenge classical models of perceptual inference where sensory uncertainty estimates depend only on the current stimulus. They demonstrate that the brain capitalizes on the temporal dynamics of the external world and estimates sensory uncertainty by combining past experiences with new incoming sensory signals.

eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Ulrik Beierholm ◽  
Tim Rohe ◽  
Ambra Ferrari ◽  
Oliver Stegle ◽  
Uta Noppeney

To form a more reliable percept of the environment, the brain needs to estimate its own sensory uncertainty. Current theories of perceptual inference assume that the brain computes sensory uncertainty instantaneously and independently for each stimulus. We evaluated this assumption in four psychophysical experiments, in which human observers localized auditory signals that were presented synchronously with spatially disparate visual signals. Critically, the visual noise changed dynamically over time, either continuously or with intermittent jumps. Our results show that observers integrate audiovisual inputs weighted by sensory uncertainty estimates that combine information from past and current signals, consistent with an optimal Bayesian learner that can be approximated by exponential discounting. Our results challenge leading models of perceptual inference where sensory uncertainty estimates depend only on the current stimulus. They demonstrate that the brain capitalizes on the temporal dynamics of the external world and estimates sensory uncertainty by combining past experiences with new incoming sensory signals.
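To make the exponential-discounting idea concrete, here is a minimal Python sketch (not the authors' code; the function names, the discount factor alpha and the leaky-variance update are illustrative assumptions) of an observer that estimates visual reliability from past and current noise samples and uses it to weight the audiovisual cues:

```python
import numpy as np

def discounted_reliability(noise_deviations, alpha=0.8):
    """Reliability (inverse variance) of a visual cue, estimated by exponentially
    discounting past noise. noise_deviations are the samples' deviations from the
    source location; alpha near 1 weights the past heavily, alpha = 0 uses only
    the current stimulus (the classical instantaneous estimate)."""
    variance = max(float(noise_deviations[0]) ** 2, 1e-6)  # initialise from first sample
    reliabilities = []
    for x in noise_deviations:
        variance = alpha * variance + (1 - alpha) * x ** 2  # leaky running variance
        reliabilities.append(1.0 / max(variance, 1e-6))
    return np.array(reliabilities)

def fuse_audiovisual(mu_a, var_a, mu_v, var_v):
    """Reliability-weighted (forced-fusion) estimate of the audiovisual location."""
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_a)
    return w_v * mu_v + (1.0 - w_v) * mu_a
```

With alpha > 0 the reliability estimate carries over across stimuli, so a sudden jump in visual noise is tracked gradually rather than instantaneously; that lag is the behavioural signature that distinguishes the learners compared in the paper.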


2012 ◽  
Vol 25 (0) ◽  
pp. 26-27 ◽  
Author(s):  
Verena Conrad ◽  
Marco Pino Vitello ◽  
Uta Noppeney

Introduction: In multistable perception, the brain alternates between several perceptual explanations of ambiguous sensory signals. Recent studies have demonstrated crossmodal interactions between ambiguous and unambiguous signals. However, it is currently unknown whether multiple bistable processes can interact across the senses (Conrad et al., 2010; Pressnitzer and Hupe, 2006). Using the apparent motion quartet in vision and touch, this study investigated whether bistable perceptual processes for vision and touch are independent or influence each other when powerful congruency cues are provided to facilitate visuotactile integration (Conrad et al., in press). Methods: When two visual flashes and/or tactile vibration pulses are presented alternately along the two diagonals of a rectangle, subjects’ percept vacillates between vertical and horizontal apparent motion in the visual and/or tactile modalities (Carter et al., 2008). Observers were presented with unisensory (visual/tactile) and visuotactile spatially congruent and incongruent apparent motion quartets and reported their visual or tactile percepts. Results: Congruent stimulation induced pronounced visuotactile interactions, as indicated by increased dominance times and %-bias for the percept already dominant under unisensory stimulation. Yet the temporal dynamics did not converge under congruent stimulation; they also depended on subjects’ attentional focus and were generally slower for tactile than for visual reports. Conclusion: Our results support Bayesian approaches to perceptual inference, where the probability of a perceptual interpretation is determined by combining a modality-specific prior with incoming visual and/or tactile evidence. Under congruent stimulation, joint evidence from both senses decelerates the rivalry dynamics by stabilizing the more likely perceptual interpretation. Importantly, the perceptual stabilization was specific to spatiotemporally congruent visuotactile stimulation, indicating multisensory rather than cognitive bias mechanisms.
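To make that Bayesian account concrete, here is a minimal two-hypothesis sketch (an illustrative toy model, not the authors' analysis; the prior and scalar evidence values are invented placeholders). The posterior probability of one motion interpretation is the modality-specific prior multiplied by the evidence each sense provides for it:

```python
def percept_posterior(prior_horizontal, p_evidence_h, p_evidence_v):
    """Posterior probability of the 'horizontal motion' interpretation, given a
    modality-specific prior and the probability of the observed evidence under
    each interpretation (horizontal vs vertical)."""
    num_h = prior_horizontal * p_evidence_h
    num_v = (1.0 - prior_horizontal) * p_evidence_v
    return num_h / (num_h + num_v)

# Vision alone vs congruent vision + touch (likelihoods multiply across senses):
vision_only = percept_posterior(0.6, 0.7, 0.3)              # ~0.78
congruent = percept_posterior(0.6, 0.7 * 0.7, 0.3 * 0.3)    # ~0.89
```

Joint congruent evidence pushes the posterior further toward the already-dominant interpretation, which is one way to rationalize the stabilization (longer dominance times) observed under congruent stimulation.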


2020 ◽  
Author(s):  
Giada Lettieri ◽  
Giacomo Handjaras ◽  
Emiliano Ricciardi ◽  
Pietro Pietrini ◽  
Luca Cecchetti

Abstract
The stream of affect is the result of a constant interaction between past experiences, motivations, expectations and the unfolding of events. How the brain represents the relationship between time and affect has hardly been explored, as it requires modeling the complexity of everyday life in the laboratory. Movies condense into hours a multitude of emotional responses, synchronized across subjects and characterized by temporal dynamics akin to those of real-world experiences. Here, using naturalistic stimulation, time-varying intersubject brain connectivity and behavioral reports, we demonstrate that the connectivity strength of large-scale brain networks tracks changes in affect. The default mode network represents the pleasantness of the experience, whereas attention and control networks encode its intensity. Interestingly, these orthogonal descriptions of affect converge in right temporoparietal and fronto-polar cortex. Within these regions, the stream of affect is represented at multiple timescales by chronotopic maps, where the connectivity of adjacent areas preferentially maps experiences in 3- to 11-minute segments.
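One common way to obtain such a time-varying intersubject connectivity estimate is a sliding-window, leave-one-out intersubject functional correlation. The sketch below is a generic illustration under that assumption (the window length and the exact pipeline are placeholders, not the paper's code):

```python
import numpy as np

def sliding_isfc(ts_subject, ts_others, window=180):
    """Sliding-window intersubject functional correlation between one subject's
    region-A timecourse and the average region-B timecourse of the remaining
    subjects. ts_subject: shape (time,); ts_others: shape (n_other_subjects, time)."""
    reference = ts_others.mean(axis=0)            # leave-one-out group average
    n_windows = len(ts_subject) - window + 1
    isfc = np.empty(n_windows)
    for t in range(n_windows):
        isfc[t] = np.corrcoef(ts_subject[t:t + window],
                              reference[t:t + window])[0, 1]
    return isfc                                   # one connectivity value per window
```

Because the reference comes from other subjects, the resulting timecourse reflects stimulus-driven, shared dynamics rather than idiosyncratic or noise-driven fluctuations, which is what allows it to be related to group-level affect reports.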


Author(s):  
Amanda K. Robinson ◽  
Tijl Grootswagers ◽  
Sophia M. Shatek ◽  
Jack Gerboni ◽  
Alex O. Holcombe ◽  
...  

Abstract
Humans can covertly track the position of an object, even if the object is temporarily occluded. What are the neural mechanisms underlying our capacity to track moving objects when there is no physical stimulus for the brain to track? One possibility is that the brain “fills-in” information about imagined objects using internally generated representations similar to those generated by feed-forward perceptual mechanisms. Alternatively, the brain might deploy a higher order mechanism, for example using an object tracking model that integrates visual signals and motion dynamics (Kwon et al., 2015). In the present study, we used electroencephalography (EEG) and time-resolved multivariate pattern analyses to investigate the spatial processing of visible and imagined objects. Participants tracked an object that moved in discrete steps around fixation, occupying six consecutive locations. They were asked to imagine that the object continued on the same trajectory after it disappeared and move their attention to the corresponding positions. Time-resolved decoding of EEG data revealed that the location of the visible stimuli could be decoded shortly after image onset, consistent with early retinotopic visual processes. For processing of unseen/imagined positions, the patterns of neural activity resembled stimulus-driven mid-level visual processes, but were detected earlier than perceptual mechanisms, implicating an anticipatory and more variable tracking mechanism. Encoding models revealed that spatial representations were much weaker for imagined than visible stimuli. Monitoring the position of imagined objects thus utilises similar perceptual and attentional processes as monitoring objects that are actually present, but with different temporal dynamics. These results indicate that internally generated representations rely on top-down processes, and their timing is influenced by the predictability of the stimulus. All data and analysis code for this study are available at https://osf.io/8v47t/.
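For readers unfamiliar with time-resolved decoding, the sketch below shows the generic form of the approach: a classifier is trained and cross-validated independently at every timepoint, and above-chance stretches mark when position information is present in the EEG. This is an illustrative scikit-learn sketch, not the study's exact pipeline (the classifier choice and cross-validation scheme are assumptions; the authors' code is at the OSF link above):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def time_resolved_decoding(epochs, positions, n_folds=5):
    """Decode stimulus position from EEG channel patterns at each timepoint.
    epochs: array (n_trials, n_channels, n_timepoints); positions: (n_trials,)."""
    n_timepoints = epochs.shape[2]
    accuracy = np.empty(n_timepoints)
    for t in range(n_timepoints):
        X = epochs[:, :, t]                          # spatial pattern at time t
        clf = LogisticRegression(max_iter=1000)
        accuracy[t] = cross_val_score(clf, X, positions, cv=n_folds).mean()
    return accuracy
```

Comparing the accuracy timecourses for visible versus imagined positions is what reveals the earlier, more variable onset of the imagery-related signal reported above.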


2019 ◽  
Vol 121 (5) ◽  
pp. 1588-1590 ◽  
Author(s):  
Luca Casartelli

Neural, oscillatory, and computational counterparts of multisensory processing remain a crucial challenge for neuroscientists. Converging evidence underlines a certain efficiency in balancing stability and flexibility of sensory sampling, supporting the general idea that multiple parallel and hierarchically organized processing stages in the brain contribute to our understanding of the (sensory/perceptual) world. Intriguingly, how temporal dynamics impact and modulate multisensory processes in our brain can be investigated benefiting from studies on perceptual illusions.


2015 ◽  
Vol 370 (1668) ◽  
pp. 20140170 ◽  
Author(s):  
Riitta Hari ◽  
Lauri Parkkonen

We discuss the importance of timing in brain function: how temporal dynamics of the world has left its traces in the brain during evolution and how we can monitor the dynamics of the human brain with non-invasive measurements. Accurate timing is important for the interplay of neurons, neuronal circuitries, brain areas and human individuals. In the human brain, multiple temporal integration windows are hierarchically organized, with temporal scales ranging from microseconds to tens and hundreds of milliseconds for perceptual, motor and cognitive functions, and up to minutes, hours and even months for hormonal and mood changes. Accurate timing is impaired in several brain diseases. From the current repertoire of non-invasive brain imaging methods, only magnetoencephalography (MEG) and scalp electroencephalography (EEG) provide millisecond time-resolution; our focus in this paper is on MEG. Since the introduction of high-density whole-scalp MEG/EEG coverage in the 1990s, the instrumentation has not changed drastically; yet, novel data analyses are advancing the field rapidly by shifting the focus from the mere pinpointing of activity hotspots to seeking stimulus- or task-specific information and to characterizing functional networks. During the next decades, we can expect increased spatial resolution and accuracy of the time-resolved brain imaging and better understanding of brain function, especially its temporal constraints, with the development of novel instrumentation and finer-grained, physiologically inspired generative models of local and network activity. Merging both spatial and temporal information with increasing accuracy and carrying out recordings in naturalistic conditions, including social interaction, will bring much new information about human brain function.


2011 ◽  
Vol 106 (4) ◽  
pp. 1862-1874 ◽  
Author(s):  
Jan Churan ◽  
Daniel Guitton ◽  
Christopher C. Pack

Our perception of the positions of objects in our surroundings is surprisingly unaffected by movements of the eyes, head, and body. This suggests that the brain has a mechanism for maintaining perceptual stability, based either on the spatial relationships among visible objects or internal copies of its own motor commands. Strong evidence for the latter mechanism comes from the remapping of visual receptive fields that occurs around the time of a saccade. Remapping occurs when a single neuron responds to visual stimuli placed presaccadically in the spatial location that will be occupied by its receptive field after the completion of a saccade. Although evidence for remapping has been found in many brain areas, relatively little is known about how it interacts with sensory context. This interaction is important for understanding perceptual stability more generally, as the brain may rely on extraretinal signals or visual signals to different degrees in different contexts. Here, we have studied the interaction between visual stimulation and remapping by recording from single neurons in the superior colliculus of the macaque monkey, using several different visual stimulus conditions. We find that remapping responses are highly sensitive to low-level visual signals, with the overall luminance of the visual background exerting a particularly powerful influence. Specifically, although remapping was fairly common in complete darkness, such responses were usually decreased or abolished in the presence of modest background illumination. Thus the brain might make use of a strategy that emphasizes visual landmarks over extraretinal signals whenever the former are available.


2019 ◽  
Author(s):  
Michael Elliott

The binding problem refers to the puzzle of how the brain combines objects’ properties such as motion, color, shape, location, sound, etc., from diverse regions of the brain and forms a unified subjective experience. Holographic physical systems, recently discovered darlings of theoretical physics, began with research into black holes but have since evolved into the study of condensed matter systems in the laboratory like superfluids and superconductors. A primary example is the AdS/CFT correspondence. A recent conjecture of this correspondence suggests that holographic systems combine information from across a boundary surface, sort out the simplest description of said information, and, in turn, use it to determine the geometry of spacetime itself in the interior, a kind of geometric hologram. Although we would never tend to think of these two processes as related, in this paper we point out ten similarities between the two and show that holographic systems are the only physical systems that match the subjective and computational characteristics of the binding problem.


PLoS Biology ◽  
2021 ◽  
Vol 19 (11) ◽  
pp. e3001465
Author(s):  
Ambra Ferrari ◽  
Uta Noppeney

To form a percept of the multisensory world, the brain needs to integrate signals from common sources weighted by their reliabilities and segregate those from independent sources. Previously, we have shown that anterior parietal cortices combine sensory signals into representations that take into account the signals’ causal structure (i.e., common versus independent sources) and their sensory reliabilities as predicted by Bayesian causal inference. The current study asks to what extent and how attentional mechanisms can actively control how sensory signals are combined for perceptual inference. In a pre- and postcueing paradigm, we presented observers with audiovisual signals at variable spatial disparities. Observers were precued to attend to auditory or visual modalities prior to stimulus presentation and postcued to report their perceived auditory or visual location. Combining psychophysics, functional magnetic resonance imaging (fMRI), and Bayesian modelling, we demonstrate that the brain moulds multisensory inference via 2 distinct mechanisms. Prestimulus attention to vision enhances the reliability and influence of visual inputs on spatial representations in visual and posterior parietal cortices. Poststimulus report determines how parietal cortices flexibly combine sensory estimates into spatial representations consistent with Bayesian causal inference. Our results show that distinct neural mechanisms control how signals are combined for perceptual inference at different levels of the cortical hierarchy.
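For context, a compact sketch of the Bayesian causal inference observer that this modelling builds on (following Körding et al., 2007) is given below. The parameter values are placeholders and the authors' fitted model may differ in detail, but the structure, weighing a fusion estimate against a segregation estimate by the posterior probability of a common source, is the standard one:

```python
import numpy as np

def bci_auditory_estimate(x_a, x_v, var_a, var_v, var_p=100.0, p_common=0.5):
    """Model-averaged auditory location estimate under Bayesian causal inference
    (Koerding et al., 2007), with a zero-mean Gaussian spatial prior (variance var_p).
    x_a, x_v: noisy auditory/visual cues; var_a, var_v: their noise variances."""
    # Fusion estimate if both cues share a common source (C = 1).
    w_a, w_v, w_p = 1.0 / var_a, 1.0 / var_v, 1.0 / var_p
    s_common = (w_a * x_a + w_v * x_v) / (w_a + w_v + w_p)
    # Segregated auditory estimate if the sources are independent (C = 2).
    s_indep = (w_a * x_a) / (w_a + w_p)
    # Likelihood of the cue pair under each causal structure.
    var_c = var_a * var_v + var_a * var_p + var_v * var_p
    like_c1 = np.exp(-0.5 * ((x_a - x_v) ** 2 * var_p + x_a ** 2 * var_v
                             + x_v ** 2 * var_a) / var_c) / (2 * np.pi * np.sqrt(var_c))
    var_i = (var_a + var_p) * (var_v + var_p)
    like_c2 = np.exp(-0.5 * (x_a ** 2 / (var_a + var_p)
                             + x_v ** 2 / (var_v + var_p))) / (2 * np.pi * np.sqrt(var_i))
    # Posterior probability of a common source, then model averaging.
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
    return post_c1 * s_common + (1 - post_c1) * s_indep
```

In this framework, prestimulus attention would act on the sensory variances (e.g. lowering var_v), while the postcued report determines which modality's estimate is read out, matching the two mechanisms dissociated in the study.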


Author(s):  
Zahra Mousavi ◽  
Mohammad Mahdi Kiani ◽  
Hamid Aghajan

Abstract
The brain is constantly anticipating the future of sensory inputs based on past experiences. When new sensory data differ from predictions shaped by recent trends, neural signals are generated to report this surprise. Existing models for quantifying surprise are based on an ideal-observer assumption operating under one of three definitions of surprise: the Shannon, Bayesian, and Confidence-corrected surprise. In this paper, we analyze both visual and auditory EEG and auditory MEG signals recorded during oddball tasks to examine which temporal components in these signals are sufficient to decode the brain’s surprise under each of these three definitions. We found that for both recording systems the Shannon surprise is always significantly better decoded than the Bayesian surprise, regardless of the sensory modality and the selected temporal features used for decoding.

Author summary
A regression model is proposed for decoding the level of the brain’s surprise in response to sensory sequences, using selected temporal components of recorded EEG and MEG data. Three surprise quantification definitions (Shannon, Bayesian, and Confidence-corrected surprise) are compared in the decoding power they offer. Four different regimes for selecting temporal samples of the EEG and MEG data are used to evaluate which part of the recorded data may contain signatures that represent the brain’s surprise, in terms of offering high decoding power. We found that both the middle and late components of the EEG response offer strong decoding power for surprise, while the early components are significantly weaker in decoding surprise. In the MEG response, we found that the middle components have the highest decoding power, while the late components offer moderate decoding power. When using a single temporal sample for decoding surprise, samples of the middle segment possess the highest decoding power. Shannon surprise is always better decoded than the other definitions of surprise for all four temporal feature selection regimes. Similar superiority of Shannon surprise is observed for the EEG and MEG data across the entire range of temporal sample regimes used in our analysis.
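To make the competing definitions concrete, here is a minimal ideal-observer sketch for a binary oddball sequence. Shannon surprise is the negative log predictive probability of the current stimulus; Bayesian surprise is the KL divergence between the observer's posterior and prior beliefs. The Beta-Bernoulli observer is an illustrative assumption (the paper's observer models may differ), and Confidence-corrected surprise is omitted for brevity:

```python
import numpy as np
from scipy.special import betaln, digamma

def kl_beta(a1, b1, a2, b2):
    """KL divergence KL( Beta(a1, b1) || Beta(a2, b2) )."""
    return (betaln(a2, b2) - betaln(a1, b1)
            + (a1 - a2) * digamma(a1) + (b1 - b2) * digamma(b1)
            + (a2 - a1 + b2 - b1) * digamma(a1 + b1))

def oddball_surprises(sequence, a=1.0, b=1.0):
    """Shannon and Bayesian surprise per stimulus for a binary oddball sequence
    (1 = deviant, 0 = standard) under a Beta-Bernoulli ideal observer."""
    shannon, bayesian = [], []
    for x in sequence:
        p_deviant = a / (a + b)                       # predictive probability
        shannon.append(-np.log(p_deviant if x else 1.0 - p_deviant))
        a_new, b_new = a + x, b + (1 - x)             # Bayesian belief update
        bayesian.append(kl_beta(a_new, b_new, a, b))  # KL(posterior || prior)
        a, b = a_new, b_new
    return np.array(shannon), np.array(bayesian)

# Example: a rare deviant after a run of standards yields large Shannon surprise.
sh, ba = oddball_surprises([0, 0, 0, 0, 1, 0, 0, 1])
```

Note that the two measures can dissociate: a deviant late in a long run is highly Shannon-surprising, yet its Bayesian surprise shrinks as the observer's beliefs become more entrenched, which is why decoding performance can differ between the definitions.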

