To integrate or not to integrate: Temporal dynamics of hierarchical Bayesian Causal Inference

2018 ◽  
Author(s):  
Máté Aller ◽  
Uta Noppeney

Abstract To form a percept of the environment, the brain needs to solve the binding problem – inferring whether signals come from a common cause and should be integrated, or come from independent causes and should be segregated. Behaviourally, humans solve this problem near-optimally as predicted by Bayesian Causal Inference, but the neural mechanisms remain unclear. Combining Bayesian modelling, electroencephalography (EEG), and multivariate decoding in an audiovisual spatial localization task, we show that the brain accomplishes Bayesian Causal Inference by dynamically encoding multiple spatial estimates. Initially, auditory and visual signal locations are estimated independently; next, an estimate is formed that combines information from vision and audition. Yet, it is only from 200 ms onwards that the brain integrates audiovisual signals weighted by their bottom-up sensory reliabilities and top-down task-relevance into spatial priority maps that guide behavioural responses. Critically, as predicted by Bayesian Causal Inference, these spatial priority maps take into account the brain’s uncertainty about the world’s causal structure and flexibly arbitrate between sensory integration and segregation. The dynamic evolution of perceptual estimates thus reflects the hierarchical nature of Bayesian Causal Inference, a statistical computation crucial for effective interactions with the environment.
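The reliability-weighted integration described in this abstract can be illustrated with a minimal sketch. Under a common-cause (forced-fusion) assumption, two noisy location estimates are combined weighted by their reliabilities (inverse variances); all numbers below are illustrative assumptions, not values from the study.

```python
import math

def fuse(mu_a, sigma_a, mu_v, sigma_v):
    """Combine auditory and visual location estimates weighted by
    reliability (1/variance), as in forced-fusion integration."""
    r_a, r_v = 1.0 / sigma_a**2, 1.0 / sigma_v**2
    mu = (r_a * mu_a + r_v * mu_v) / (r_a + r_v)
    sigma = math.sqrt(1.0 / (r_a + r_v))
    return mu, sigma

# Illustrative stimulus: auditory estimate at 10 deg (noisy),
# visual estimate at 6 deg (more reliable).
mu, sigma = fuse(mu_a=10.0, sigma_a=4.0, mu_v=6.0, sigma_v=2.0)
```

The fused estimate lies closer to the more reliable visual signal, and its variance is smaller than either unisensory variance, which is the behavioural signature of reliability-weighted integration.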

PLoS Biology ◽  
2021 ◽  
Vol 19 (11) ◽  
pp. e3001465
Author(s):  
Ambra Ferrari ◽  
Uta Noppeney

To form a percept of the multisensory world, the brain needs to integrate signals from common sources weighted by their reliabilities and segregate those from independent sources. Previously, we have shown that anterior parietal cortices combine sensory signals into representations that take into account the signals’ causal structure (i.e., common versus independent sources) and their sensory reliabilities as predicted by Bayesian causal inference. The current study asks to what extent and how attentional mechanisms can actively control how sensory signals are combined for perceptual inference. In a pre- and postcueing paradigm, we presented observers with audiovisual signals at variable spatial disparities. Observers were precued to attend to auditory or visual modalities prior to stimulus presentation and postcued to report their perceived auditory or visual location. Combining psychophysics, functional magnetic resonance imaging (fMRI), and Bayesian modelling, we demonstrate that the brain moulds multisensory inference via 2 distinct mechanisms. Prestimulus attention to vision enhances the reliability and influence of visual inputs on spatial representations in visual and posterior parietal cortices. Poststimulus report determines how parietal cortices flexibly combine sensory estimates into spatial representations consistent with Bayesian causal inference. Our results show that distinct neural mechanisms control how signals are combined for perceptual inference at different levels of the cortical hierarchy.


2021 ◽  
Vol 118 (32) ◽  
pp. e2106235118
Author(s):  
Reuben Rideaux ◽  
Katherine R. Storrs ◽  
Guido Maiello ◽  
Andrew E. Welchman

Sitting in a static railway carriage can produce illusory self-motion if the train on an adjoining track moves off. While our visual system registers motion, vestibular signals indicate that we are stationary. The brain is faced with a difficult challenge: is there a single cause of sensations (I am moving) or two causes (I am static, another train is moving)? If a single cause, integrating signals produces a more precise estimate of self-motion, but if not, one cue should be ignored. In many cases, this process of causal inference works without error, but how does the brain achieve it? Electrophysiological recordings show that the macaque medial superior temporal area contains many neurons that encode combinations of vestibular and visual motion cues. Some respond best to vestibular and visual motion in the same direction (“congruent” neurons), while others prefer opposing directions (“opposite” neurons). Congruent neurons could underlie cue integration, but the function of opposite neurons remains a puzzle. Here, we seek to explain this computational arrangement by training a neural network model to solve causal inference for motion estimation. Like biological systems, the model develops congruent and opposite units and recapitulates known behavioral and neurophysiological observations. We show that all units (both congruent and opposite) contribute to motion estimation. Importantly, however, it is the balance between their activity that distinguishes whether visual and vestibular cues should be integrated or separated. This explains the computational purpose of puzzling neural representations and shows how a relatively simple feedforward network can solve causal inference.


Neuroforum ◽  
2018 ◽  
Vol 24 (4) ◽  
pp. A169-A181
Author(s):  
Uta Noppeney ◽  
Samuel A. Jones ◽  
Tim Rohe ◽  
Ambra Ferrari

Abstract Our senses are constantly bombarded with a myriad of signals. To make sense of this cacophony, the brain needs to integrate signals emanating from a common source, but segregate signals originating from different sources. Thus, multisensory perception relies critically on inferring the world’s causal structure (i.e. one common vs. multiple independent sources). Behavioural research has shown that the brain arbitrates between sensory integration and segregation consistent with the principles of Bayesian Causal Inference. At the neural level, recent functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) studies have shown that the brain accomplishes Bayesian Causal Inference by dynamically encoding multiple perceptual estimates across the sensory processing hierarchies. Only at the top of the hierarchy, in anterior parietal cortices, did the brain form perceptual estimates that take into account the observer’s uncertainty about the world’s causal structure, consistent with Bayesian Causal Inference.


2018 ◽  
Author(s):  
Tim Rohe ◽  
Ann-Christine Ehlis ◽  
Uta Noppeney

Abstract Transforming the barrage of sensory signals into a coherent multisensory percept relies on solving the binding problem – deciding whether signals come from a common cause and should be integrated, or instead be segregated. Human observers typically arbitrate between integration and segregation consistent with Bayesian Causal Inference, but the neural mechanisms remain poorly understood. We presented observers with audiovisual sequences that varied in the number of flashes and beeps. Combining Bayesian modelling and EEG representational similarity analyses, we show that the brain initially represents the number of flashes and beeps, and their numeric disparity, mainly independently. Later, it forms final perceptual estimates by averaging the forced-fusion and segregation estimates weighted by the probabilities of common and independent cause models (i.e. model averaging). Crucially, prestimulus oscillatory alpha power and phase correlate with observers’ prior beliefs about the world’s causal structure that guide their arbitration between sensory integration and segregation.
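The model-averaging decision rule this abstract describes can be sketched in a few lines. The final estimate is a weighted average of the forced-fusion (common cause) and segregated (independent causes) estimates, weighted by the posterior probability of each causal structure; `p_common` below is an illustrative free value, not a fitted posterior from the study.

```python
def model_average(fusion_est, segregation_est, p_common):
    """Bayesian Causal Inference via model averaging: weight the
    common-cause and independent-causes estimates by their
    posterior probabilities."""
    return p_common * fusion_est + (1.0 - p_common) * segregation_est

# Illustrative values: fused estimate of 2 events, segregated
# estimate of 4 events, 70% posterior belief in a common cause.
est = model_average(fusion_est=2.0, segregation_est=4.0, p_common=0.7)
```

As the posterior probability of a common cause falls (e.g. with large numeric disparity), the final estimate slides smoothly from full integration towards full segregation.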


2019 ◽  
Vol 121 (5) ◽  
pp. 1588-1590 ◽  
Author(s):  
Luca Casartelli

Neural, oscillatory, and computational counterparts of multisensory processing remain a crucial challenge for neuroscientists. Converging evidence underlines a certain efficiency in balancing stability and flexibility of sensory sampling, supporting the general idea that multiple parallel and hierarchically organized processing stages in the brain contribute to our understanding of the (sensory/perceptual) world. Intriguingly, how temporal dynamics impact and modulate multisensory processes in the brain can be investigated by drawing on studies of perceptual illusions.


2015 ◽  
Vol 370 (1668) ◽  
pp. 20140170 ◽  
Author(s):  
Riitta Hari ◽  
Lauri Parkkonen

We discuss the importance of timing in brain function: how the temporal dynamics of the world have left their traces in the brain during evolution, and how we can monitor the dynamics of the human brain with non-invasive measurements. Accurate timing is important for the interplay of neurons, neuronal circuitries, brain areas and human individuals. In the human brain, multiple temporal integration windows are hierarchically organized, with temporal scales ranging from microseconds to tens and hundreds of milliseconds for perceptual, motor and cognitive functions, and up to minutes, hours and even months for hormonal and mood changes. Accurate timing is impaired in several brain diseases. From the current repertoire of non-invasive brain imaging methods, only magnetoencephalography (MEG) and scalp electroencephalography (EEG) provide millisecond time-resolution; our focus in this paper is on MEG. Since the introduction of high-density whole-scalp MEG/EEG coverage in the 1990s, the instrumentation has not changed drastically; yet, novel data analyses are advancing the field rapidly by shifting the focus from the mere pinpointing of activity hotspots to seeking stimulus- or task-specific information and to characterizing functional networks. During the next decades, we can expect increased spatial resolution and accuracy of time-resolved brain imaging and better understanding of brain function, especially its temporal constraints, with the development of novel instrumentation and finer-grained, physiologically inspired generative models of local and network activity. Merging both spatial and temporal information with increasing accuracy and carrying out recordings in naturalistic conditions, including social interaction, will bring much new information about human brain function.


2021 ◽  
Author(s):  
James M. Hill ◽  
Christian Clement ◽  
L. Arceneaux ◽  
Walter Lukiw

Abstract Background: Multiple lines of evidence currently indicate that the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) gains entry into human host cells via a high-affinity interaction with the angiotensin-converting enzyme 2 (ACE2) transmembrane receptor. Research has further shown the widespread expression of the ACE2 receptor on the surface of many different immune, non-immune and neural host cell types, and that SARS-CoV-2 has the remarkable capability to attack many different types of human host cells simultaneously. One principal neuroanatomical region of high ACE2 expression is the brainstem, an area of the brain containing regulatory centers for respiration, which may in part explain the predisposition of many COVID-19 patients to respiratory distress. Early studies also indicated extensive ACE2 expression in the whole eye and the brain’s visual circuitry. In this study we analyzed ACE2 receptor expression at the mRNA and protein level in multiple cell types involved in human vision, including cell types of the external eye and several deep brain regions known to be involved in the processing of visual signals. Methods: ACE2 mRNA and protein analysis; multiple eye and brain cells and tissues; gamma-32P-adenosine triphosphate ([γ-32P]dATP) radiolabeled probes; Northern analysis; ELISA. Results: The four main findings were: (i) that many different optical and neural cell types of the human visual system provide receptors essential for SARS-CoV-2 invasion; (ii) the remarkable ubiquity of ACE2 presence in cells of the eye and anatomical regions of the brain involved in visual signal processing; (iii) that ACE2 receptor expression in different ocular cell types and visual processing centers of the brain provides multiple compartments for SARS-CoV-2 infiltration; and (iv) a gradient of increasing ACE2 expression from the anterior surface of the eye to the visual signal processing areas of the occipital lobe and the primary visual neocortex. Conclusion: A gradient of ACE2 expression from the eye surface to the occipital lobe provides the SARS-CoV-2 virus a novel pathway from the outer eye into deeper anatomical regions of the brain involved in vision. These findings may explain, in part, the many recently reported neuro-ophthalmic manifestations of SARS-CoV-2 infection in COVID-19 affected patients.


2019 ◽  
Author(s):  
Ulrik Beierholm ◽  
Tim Rohe ◽  
Ambra Ferrari ◽  
Oliver Stegle ◽  
Uta Noppeney

Abstract To form the most reliable percept of the environment, the brain needs to represent sensory uncertainty. Current theories of perceptual inference assume that the brain computes sensory uncertainty instantaneously and independently for each stimulus. In a series of psychophysics experiments, human observers localized auditory signals that were presented in synchrony with spatially disparate visual signals. Critically, the visual noise changed dynamically over time, with or without intermittent jumps. Our results show that observers integrate audiovisual inputs weighted by sensory reliability estimates that combine information from past and current signals, as predicted by an optimal Bayesian learner or approximate strategies of exponential discounting. Our results challenge classical models of perceptual inference where sensory uncertainty estimates depend only on the current stimulus. They demonstrate that the brain capitalizes on the temporal dynamics of the external world and estimates sensory uncertainty by combining past experiences with new incoming sensory signals.
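The exponential-discounting strategy mentioned here can be sketched as a running variance estimate in which newer noise samples count more than older ones. The learning rate `alpha` and the sample values are illustrative assumptions, not parameters fitted in the study.

```python
def discounted_variance(noise_samples, alpha=0.3):
    """Track sensory noise variance by exponentially discounting
    past samples: newer observations weigh more than older ones."""
    var = noise_samples[0] ** 2
    for x in noise_samples[1:]:
        # Exponential forgetting: blend the old estimate with the
        # squared magnitude of the newest noise sample.
        var = (1.0 - alpha) * var + alpha * x**2
    return var

# After a sudden jump in noise (1.0 -> 3.0), the estimate moves
# toward the new noise level over samples rather than instantly.
v = discounted_variance([1.0, 1.0, 3.0], alpha=0.3)
```

This captures the paper's key contrast: an observer using such an estimator carries information from past signals into the current reliability estimate, unlike classical models that reset uncertainty on every stimulus.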


2018 ◽  
Author(s):  
D.H. Baker ◽  
G. Vilidaite ◽  
E. McClarnon ◽  
E. Valkova ◽  
A. Bruno ◽  
...  

Abstract The brain combines sounds from the two ears, but what is the algorithm used to achieve this summation of signals? Here we combine psychophysical amplitude modulation discrimination and steady-state electroencephalography (EEG) data to investigate the architecture of binaural combination for amplitude-modulated tones. Discrimination thresholds followed a ‘dipper’ shaped function of pedestal modulation depth, and were consistently lower for binaural than monaural presentation of modulated tones. The EEG responses were greater for binaural than monaural presentation of modulated tones, and when a masker was presented to one ear, it produced only weak suppression of the response to a signal presented to the other ear. Both data sets were well-fit by a computational model originally derived for visual signal combination, but with suppression between the two channels (ears) being much weaker than in binocular vision. We suggest that the distinct ecological constraints on vision and hearing can explain this difference, if it is assumed that the brain avoids over-representing sensory signals originating from a single object. These findings position our understanding of binaural summation in a broader context of work on sensory signal combination in the brain, and delineate the similarities and differences between vision and hearing.


2017 ◽  
Vol 1 (2) ◽  
pp. 69-99 ◽  
Author(s):  
William Hedley Thompson ◽  
Per Brantefors ◽  
Peter Fransson

Network neuroscience has become an established paradigm to tackle questions related to the functional and structural connectome of the brain. Recently, interest has been growing in examining the temporal dynamics of the brain’s network activity. Although different approaches to capturing fluctuations in brain connectivity have been proposed, there have been few attempts to quantify these fluctuations using temporal network theory. This theory is an extension of network theory that has been successfully applied to the modeling of dynamic processes in economics, social sciences, and engineering, but it has not been adopted to a great extent within network neuroscience. The objective of this article is twofold: (i) to present a detailed description of the central tenets of temporal network theory and describe its measures; and (ii) to apply these measures to a resting-state fMRI dataset to illustrate their utility. Furthermore, we discuss the interpretation of temporal network theory in the context of the dynamic functional brain connectome. All the temporal network measures and plotting functions described in this article are freely available as the Python package Teneto.

