A neurally plausible model for online recognition and postdiction in a dynamical environment

2019 ◽  
Author(s):  
Li Kevin Wenliang ◽  
Maneesh Sahani

Abstract
Humans and other animals are frequently near-optimal in their ability to integrate noisy and ambiguous sensory data to form robust percepts, which are informed both by sensory evidence and by prior experience about the causal structure of the environment. It is hypothesized that the brain establishes these structures using an internal model of how the observed patterns can be generated from relevant but unobserved causes. In dynamic environments, such integration often takes the form of postdiction, wherein later sensory evidence affects inferences about earlier percepts. As the brain must operate in current time, without the luxury of acausal propagation of information, how does such postdictive inference come about? Here, we propose a general framework for neural probabilistic inference in dynamic models based on the distributed distributional code (DDC) representation of uncertainty, naturally extending the underlying encoding to incorporate implicit probabilistic beliefs about both present and past. We show that, as in other uses of the DDC, an inferential model can be learned efficiently using samples from an internal model of the world. Applied to stimuli used in psychophysics experiments, the framework provides an online, neurally plausible mechanism for inference, including postdictive effects.
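The DDC idea can be sketched in a few lines: a belief over a latent variable is represented by the mean activations of a fixed population of nonlinear encoding functions, and those means can be estimated from samples drawn from an internal model. The Gaussian basis and all parameters below are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
centres = np.linspace(-3, 3, 7)          # preferred values of 7 "neurons" (assumed)

def ddc_code(samples, width=1.0):
    """DDC representation: mean activation of fixed Gaussian basis
    functions under the belief, estimated from internal-model samples."""
    acts = np.exp(-(samples[:, None] - centres[None, :]) ** 2
                  / (2 * width ** 2))
    return acts.mean(axis=0)             # one expectation per basis function

r_prior = ddc_code(rng.normal(0.0, 1.0, 5000))   # code for a broad belief
r_post = ddc_code(rng.normal(1.5, 0.3, 5000))    # code for a sharper belief
```

Because the code stores expectations of fixed functions, any expectation in their span can be read out linearly from the population activity, which is what makes training an inference network on internally generated samples tractable.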

2018 ◽  
Author(s):  
Ilker Yildirim ◽  
Mario Belledonne ◽  
Winrich Freiwald ◽  
Joshua Tenenbaum

Vision must not only recognize and localize objects, but also perform richer inferences about the underlying causes in the world that give rise to sensory data. How the brain performs these inferences remains unknown: Theoretical proposals based on inverting generative models (or “analysis-by-synthesis”) have a long history, but their mechanistic implementations have typically been too slow to support online perception, and their mapping to neural circuits is unclear. Here we present a neurally plausible model for efficiently inverting generative models of images and test it as an account of one high-level visual capacity, the perception of faces. The model is based on a deep neural network that learns to invert a three-dimensional (3D) face graphics program in a single fast feedforward pass. It explains both human behavioral data and multiple levels of neural processing in non-human primates, as well as a classic illusion, the “hollow face” effect. The model fits the data qualitatively better than state-of-the-art computer vision models, and suggests an interpretable reverse-engineering account of how images are transformed into percepts in the ventral stream.


2009 ◽  
Vol 364 (1521) ◽  
pp. 1211-1221 ◽  
Author(s):  
Karl Friston ◽  
Stefan Kiebel

This paper considers prediction and perceptual categorization as an inference problem that is solved by the brain. We assume that the brain models the world as a hierarchy or cascade of dynamical systems that encode causal structure in the sensorium. Perception is equated with the optimization or inversion of these internal models, to explain sensory data. Given a model of how sensory data are generated, we can invoke a generic approach to model inversion, based on a free energy bound on the model's evidence. The ensuing free-energy formulation furnishes equations that prescribe the process of recognition, i.e. the dynamics of neuronal activity that represent the causes of sensory input. Here, we focus on a very general model, whose hierarchical and dynamical structure enables simulated brains to recognize and predict trajectories or sequences of sensory states. We first review hierarchical dynamical models and their inversion. We then show that the brain has the necessary infrastructure to implement this inversion and illustrate this point using synthetic birds that can recognize and categorize birdsongs.
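The free-energy bound invoked here can be stated in its standard variational form (a textbook identity, not transcribed from the paper): for any recognition density $q(\vartheta)$ over the causes $\vartheta$ of sensory data $y$,

```latex
F(q) \;=\; \mathrm{KL}\!\left[\,q(\vartheta)\,\|\,p(\vartheta \mid y)\,\right] \;-\; \ln p(y) \;\ge\; -\ln p(y)
```

Because the KL divergence is non-negative, minimizing $F$ with respect to $q$ simultaneously tightens a bound on the log-evidence $\ln p(y)$ and drives $q$ toward the true posterior over causes, which is what licenses treating recognition dynamics as gradient descent on free energy.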


Physiology ◽  
2001 ◽  
Vol 16 (5) ◽  
pp. 234-238 ◽  
Author(s):  
Bernhard J. M. Hess

The central vestibular system receives afferent information about head position as well as rotation and translation. This information is used not only to prevent blurring of the retinal image but also to control self-orientation and motion in space. Vestibular signal processing in the brain stem appears to be linked to an internal model of head motion in space.


PLoS Biology ◽  
2021 ◽  
Vol 19 (11) ◽  
pp. e3001465
Author(s):  
Ambra Ferrari ◽  
Uta Noppeney

To form a percept of the multisensory world, the brain needs to integrate signals from common sources weighted by their reliabilities and segregate those from independent sources. Previously, we have shown that anterior parietal cortices combine sensory signals into representations that take into account the signals’ causal structure (i.e., common versus independent sources) and their sensory reliabilities as predicted by Bayesian causal inference. The current study asks to what extent and how attentional mechanisms can actively control how sensory signals are combined for perceptual inference. In a pre- and postcueing paradigm, we presented observers with audiovisual signals at variable spatial disparities. Observers were precued to attend to auditory or visual modalities prior to stimulus presentation and postcued to report their perceived auditory or visual location. Combining psychophysics, functional magnetic resonance imaging (fMRI), and Bayesian modelling, we demonstrate that the brain moulds multisensory inference via 2 distinct mechanisms. Prestimulus attention to vision enhances the reliability and influence of visual inputs on spatial representations in visual and posterior parietal cortices. Poststimulus report determines how parietal cortices flexibly combine sensory estimates into spatial representations consistent with Bayesian causal inference. Our results show that distinct neural mechanisms control how signals are combined for perceptual inference at different levels of the cortical hierarchy.
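The reliability-weighted combination at the core of this account can be made concrete with a minimal sketch: under a common-cause hypothesis, the optimal fused location is an inverse-variance weighted average of the unisensory estimates, so attention-driven changes in visual reliability shift the percept toward vision. The numbers below are made up for illustration.

```python
def fused_estimate(x_a, x_v, var_a, var_v):
    """Inverse-variance (reliability-weighted) fusion of auditory and
    visual location estimates, as prescribed by a common-cause model."""
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_a)   # visual weight
    return w_v * x_v + (1.0 - w_v) * x_a

# Attending to vision is modelled here as reducing visual noise (var_v),
# which pulls the fused location estimate toward the visual signal:
print(fused_estimate(10.0, 0.0, var_a=4.0, var_v=4.0))  # equal reliabilities, midpoint
print(fused_estimate(10.0, 0.0, var_a=4.0, var_v=1.0))  # vision more reliable, near 0
```

Full Bayesian causal inference additionally weighs this fused estimate against the segregated unisensory estimates by the posterior probability of a common versus independent cause; the sketch above shows only the fusion component.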


Author(s):  
Zahra Mousavi ◽  
Mohammad Mahdi Kiani ◽  
Hamid Aghajan

Abstract
The brain is constantly anticipating the future of sensory inputs based on past experiences. When new sensory data differ from predictions shaped by recent trends, neural signals are generated to report this surprise. Existing models for quantifying surprise assume an ideal observer operating under one of three definitions of surprise: Shannon, Bayesian, or confidence-corrected surprise. In this paper, we analyze visual and auditory EEG and auditory MEG signals recorded during oddball tasks to examine which temporal components of these signals are sufficient to decode the brain's surprise under each of these three definitions. We found that, for both recording systems, Shannon surprise is always decoded significantly better than Bayesian surprise, regardless of the sensory modality and the temporal features selected for decoding.

Author summary
A regression model is proposed for decoding the level of the brain's surprise in response to sensory sequences using selected temporal components of recorded EEG and MEG data. Three surprise quantification definitions (Shannon, Bayesian, and confidence-corrected surprise) are compared in the decoding power they offer. Four different regimes for selecting temporal samples of the EEG and MEG data are used to evaluate which part of the recorded data may contain signatures of the brain's surprise, in the sense of offering high decoding power. We found that both the middle and late components of the EEG response offer strong decoding power for surprise, while the early components are significantly weaker. In the MEG response, the middle components have the highest decoding power, while the late components offer moderate decoding power. When a single temporal sample is used for decoding surprise, samples from the middle segment possess the highest decoding power. Shannon surprise is decoded better than the other definitions of surprise under all four temporal feature selection regimes, and this superiority holds for both the EEG and MEG data across the entire range of temporal samples used in our analysis.
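The two main definitions being compared can be made concrete for a binary oddball sequence. In the sketch below, Shannon surprise is the negative log predictive probability of the current stimulus, while Bayesian surprise is the KL divergence between the observer's posterior and prior beliefs; the discrete-grid observer is a toy assumption for illustration, not the paper's exact model.

```python
import math

def surprises(seq, grid=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Shannon and Bayesian surprise for a binary sequence, under a toy
    ideal observer holding a discrete belief over the deviant probability."""
    belief = [1.0 / len(grid)] * len(grid)           # uniform initial belief
    out = []
    for x in seq:
        like = [p if x else 1 - p for p in grid]     # per-hypothesis likelihood
        pred = sum(w * l for w, l in zip(belief, like))
        shannon = -math.log(pred)                    # surprise of the datum itself
        posterior = [w * l / pred for w, l in zip(belief, like)]
        bayesian = sum(q * math.log(q / w)           # size of the belief update
                       for q, w in zip(posterior, belief) if q > 0)
        out.append((shannon, bayesian))
        belief = posterior
    return out

# A deviant after a run of standards is more surprising than the standards:
vals = surprises([0, 0, 0, 0, 0, 0, 1])
```

The two quantities dissociate: Shannon surprise scores how improbable the stimulus was, whereas Bayesian surprise scores how much the belief moved, which is what makes decoding analyses able to favor one over the other.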


2014 ◽  
Vol 281 (1781) ◽  
pp. 20132630 ◽  
Author(s):  
Mugdha Deshpande ◽  
Fakhriddin Pirlepesov ◽  
Thierry Lints

As in human infant speech development, vocal imitation in songbirds involves sensory acquisition and memorization of adult-produced vocal signals, followed by a protracted phase of vocal motor practice. The internal model of adult tutor song in the juvenile male brain, termed ‘the template’, is central to the vocal imitation process. However, even the most fundamental aspects of the template, such as when, where and how it is encoded in the brain, remain poorly understood. A major impediment to progress is that current studies of songbird vocal learning use protracted tutoring over days, weeks or months, complicating dissection of the template encoding process. Here, we take the key step of tightly constraining the timing of template acquisition. We show that, in the zebra finch, template encoding can be time locked to, on average, a 2 h period of juvenile life and based on just 75 s of cumulative tutor song exposure. Crucially, we find that vocal changes occurring on the day of training correlate with eventual imitative success. This paradigm will lead to insights on how the template is instantiated in the songbird brain, with general implications for deciphering how internal models are formed to guide learning of complex social behaviours.


2021 ◽  
pp. 1-46
Author(s):  
João Angelo Ferres Brogin ◽  
Jean Faber ◽  
Douglas Domingues Bueno

Abstract Epilepsy is one of the most common brain disorders worldwide, affecting millions of people every year. Although significant effort has been put into better understanding it and mitigating its effects, conventional treatments are not fully effective. Advances in computational neuroscience, using mathematical dynamic models that represent brain activity at different scales, have made it possible to address epilepsy from a more theoretical standpoint. In particular, the recently proposed Epileptor model stands out among these models because it captures the main features of seizures well, and the results of its simulations have been consistent with experimental observations. In addition, there has been increasing interest in designing control techniques for the Epileptor that might lead to realistic feedback controllers in the future. However, such approaches rely on knowing all of the states of the model, which is not the case in practice. The work explored in this letter aims to develop a state observer to estimate the Epileptor's unmeasurable variables and to reconstruct the corresponding so-called bursters. Furthermore, an alternative model formulation is presented to enhance the convergence speed of the observer. The results show that the proposed approach is effective under two main conditions: when the brain is undergoing a seizure and when a transition from healthy to epileptiform activity occurs.
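The underlying idea of a state observer can be illustrated on a stand-in system (a generic two-state linear model, not the Epileptor equations): only one variable is measured, and a Luenberger-style correction term driven by the output error reconstructs the hidden state.

```python
import numpy as np

# Toy linear system x' = A x with a single measured output y = C x.
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])             # assumed dynamics, stable oscillator
C = np.array([[1.0, 0.0]])               # only x1 is measurable
L = np.array([[3.0], [5.0]])             # observer gain; A - L C is stable

dt, steps = 0.01, 2000
x = np.array([1.0, 0.0])                 # true state (unknown to the observer)
xh = np.array([0.0, 0.0])                # observer's estimate
for _ in range(steps):
    y = C @ x                            # measurement of the true system
    x = x + dt * (A @ x)                 # true dynamics (Euler step)
    xh = xh + dt * (A @ xh + L @ (y - C @ xh))   # model + output-error correction

print(np.abs(x - xh).max())              # estimation error after 20 s (tiny)
```

The error obeys e' = (A - LC)e, so choosing L to make A - LC stable guarantees convergence of the estimate; observer design for the nonlinear Epileptor follows the same principle with additional structure.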


Author(s):  
Christof Koch

The brain computes! This is accepted as a truism by the majority of neuroscientists engaged in discovering the principles employed in the design and operation of nervous systems. What is meant here is that any brain takes the incoming sensory data, encodes them into various biophysical variables, such as the membrane potential or neuronal firing rates, and subsequently performs a very large number of ill-specified operations, frequently termed computations, on these variables to extract relevant features from the input. The outcome of some of these computations can be stored for later access and will, ultimately, control the motor output of the animal in appropriate ways. The present book is dedicated to understanding in detail the biophysical mechanisms responsible for these computations. Its scope is the type of information processing underlying perception and motor control, occurring at the millisecond to fraction of a second time scale. When you look at a pair of stereo images trying to fuse them into a binocular percept, your brain is busily computing away trying to find the “best” solution. What are the computational primitives at the neuronal and subneuronal levels underlying this impressive performance, unmatched by any machine? Naively put and using the language of the electronic circuit designer, the book asks: “What are the diodes and the transistors of the brain?” and “What sort of operations do these elementary circuit elements implement?” Contrary to received opinion, nerve cells are considerably more complex than suggested by work in the neural network community, in which, like morons, they are reduced to computing nothing but a thresholded sum of their inputs. We know, for instance, that individual nerve cells in the locust perform an operation akin to a multiplication. Given synapses, ionic channels, and membranes, how is this actually carried out? How do neurons integrate, delay, or change their output gain?
What are the relevant variables that carry information? The membrane potential? The concentration of intracellular Ca2+ ions? What is their temporal resolution? And how large is the variability of these signals that determines how accurately they can encode information? And what variables are used to store the intermediate results of these computations? And where does long-term memory reside? Natural philosophers and scientists in the western world have always compared the brain to the most advanced technology of the day.


Author(s):  
Romain Brette

Abstract “Neural coding” is a popular metaphor in neuroscience, where objective properties of the world are communicated to the brain in the form of spikes. Here I argue that this metaphor is often inappropriate and misleading. First, when neurons are said to encode experimental parameters, the neural code depends on experimental details that are not carried by the coding variable (e.g., the spike count). Thus, the representational power of neural codes is much more limited than generally implied. Second, neural codes carry information only by reference to things with known meaning. In contrast, perceptual systems must build information from relations between sensory signals and actions, forming an internal model. Neural codes are inadequate for this purpose because they are unstructured and therefore unable to represent relations. Third, coding variables are observables tied to the temporality of experiments, whereas spikes are timed actions that mediate coupling in a distributed dynamical system. The coding metaphor tries to fit the dynamic, circular, and distributed causal structure of the brain into a linear chain of transformations between observables, but the two causal structures are incongruent. I conclude that the neural coding metaphor cannot provide a valid basis for theories of brain function, because it is incompatible with both the causal structure of the brain and the representational requirements of cognition.


2003 ◽  
Vol 90 (6) ◽  
pp. 4016-4021 ◽  
Author(s):  
Thrishantha Nanayakkara ◽  
Reza Shadmehr

The delays in sensorimotor pathways pose a formidable challenge to the implementation of stable error feedback control, and yet the intact brain has little trouble maintaining limb stability. How is this achieved? One idea is that feedback control depends not only on delayed proprioceptive feedback but also on internal models of limb dynamics. In theory, an internal model allows the brain to predict limb position. Earlier we had found that during reaching, the brain estimates hand position in real-time in a coordinate system that can be used for generating saccades. Here we tested the idea that the estimate of hand position, as expressed through saccades, depends on an internal model that adapts to dynamics of the arm. We focused on the behavior of the eyes as perturbations were applied to the unseen hand. We found that when the hand was perturbed from stable posture with a 100-ms force pulse of random direction and magnitude, a saccade was generated on average at 182 ms postpulse onset to a position that was an unbiased estimate of real-time hand position. To test whether planning of saccades depended on an internal model of arm dynamics, arm dynamics were altered either predictably or unpredictably during the postpulse period. When arm dynamics were predictable, saccade amplitudes changed to reflect the change in the arm's behavior. We suggest that proprioceptive feedback from the arm is integrated into an adaptable internal model that computes an estimate of current hand position in eye-centered coordinates.
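The delay-compensation logic described here can be sketched in a few lines: proprioception reports the hand state as it was some tens of milliseconds ago, and an internal forward model rolls that delayed state forward to an estimate of the current hand position. The linear arm dynamics and step size below are assumptions for illustration, not the paper's model.

```python
import numpy as np

A = np.array([[1.0, 0.01],      # position integrates velocity (10 ms steps)
              [0.0, 0.98]])     # velocity decays (assumed damping)

def predict_current(delayed_state, delay_steps, model=A):
    """Roll an internal forward model ahead to compensate for feedback delay."""
    x = np.asarray(delayed_state, dtype=float)
    for _ in range(delay_steps):
        x = model @ x
    return x

# Simulate the true arm, then estimate its current state from delayed feedback.
x, traj = np.array([0.0, 1.0]), []
for _ in range(40):
    traj.append(x)
    x = A @ x
traj.append(x)

delay = 18                       # ~180 ms of feedback delay at 10 ms per step
estimate = predict_current(traj[-1 - delay], delay)
```

With an accurate internal model the prediction tracks the true current state despite the delay; model errors after an unpredictable change in arm dynamics would bias the estimate, which is the signature the saccade experiments exploit.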

