Imperfect Bayesian inference in visual perception

2018 ◽  
Author(s):  
Elina Stengård ◽  
Ronald van den Berg

Abstract

Optimal Bayesian models have been highly successful in describing human performance on perceptual decision-making tasks, such as cue combination and visual search. However, recent studies have argued that these models are often overly flexible and therefore lack explanatory power. Moreover, there are indications that neural computation is inherently imprecise, which makes it implausible that humans would perform optimally on any non-trivial task. Here, we reconsider human performance on a visual search task by using an approach that constrains model flexibility and tests for computational imperfections. Subjects performed a target detection task in which targets and distractors were tilted ellipses with orientations drawn from Gaussian distributions with different means. We varied the amount of overlap between these distributions to create multiple levels of external uncertainty. We also varied the level of sensory noise by testing subjects under both short and unlimited display times. On average, empirical performance – measured as d’ – fell 18.1% short of optimal performance. We found no evidence that the magnitude of this suboptimality was affected by the level of internal or external uncertainty. The data were well accounted for by a Bayesian model with imperfections in its computations. This “imperfect Bayesian” model convincingly outperformed the “flawless Bayesian” model as well as all ten heuristic models that we tested. These results suggest that perception is founded on Bayesian principles, but with suboptimalities in the implementation of these principles. The view of perception as imperfect Bayesian inference can provide a middle ground between traditional Bayesian and anti-Bayesian views.

Author summary

The main task of perceptual systems is to make truthful inferences about the environment. The sensory input to these systems is often astonishingly imprecise, which makes human perception prone to error. Nevertheless, numerous studies have reported that humans often perform as accurately as is possible given these sensory imprecisions. This suggests that the brain makes optimal use of the sensory input and computes without error. The validity of this claim has recently been questioned for two reasons. First, it has been argued that a lot of the evidence for optimality comes from studies that used overly flexible models. Second, optimality in human perception is implausible due to limitations inherent to neural systems. In this study, we reconsider optimality in a standard visual perception task by devising a research method that addresses both concerns. In contrast to previous studies, we find clear indications of suboptimalities. Our data are best explained by a model that is based on the optimal decision strategy, but with imperfections in its execution.
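
To make the decision rule concrete, the following minimal sketch (an illustrative reconstruction of a single-stimulus version of the task, not the authors' code; all parameter values are assumed) simulates an optimal Bayesian observer that compares the likelihood of a noisy orientation measurement under the target and distractor distributions, and summarizes performance as d’.

```python
# Minimal sketch (assumed parameters, single-stimulus simplification) of an
# optimal Bayesian observer for target detection with Gaussian orientation
# distributions: external uncertainty comes from the stimulus distributions,
# internal uncertainty from Gaussian sensory noise; performance is d'.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu_T, mu_D = 10.0, -10.0       # mean target / distractor orientation (deg), assumed
sigma_ext = 8.0                # external (stimulus) uncertainty, assumed
sigma_int = 5.0                # internal (sensory) noise, assumed
n_trials = 100_000

is_target = rng.random(n_trials) < 0.5
s = np.where(is_target,
             rng.normal(mu_T, sigma_ext, n_trials),
             rng.normal(mu_D, sigma_ext, n_trials))
x = s + rng.normal(0.0, sigma_int, n_trials)          # noisy measurement

# Optimal decision: report "target" when the measurement is more likely under
# the target category than under the distractor category.
sigma_tot = np.hypot(sigma_ext, sigma_int)
log_lr = norm.logpdf(x, mu_T, sigma_tot) - norm.logpdf(x, mu_D, sigma_tot)
say_target = log_lr > 0.0

hit_rate = say_target[is_target].mean()
fa_rate = say_target[~is_target].mean()
d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
print(f"hit={hit_rate:.3f}  fa={fa_rate:.3f}  d'={d_prime:.2f}")
```

The "imperfect Bayesian" model described in the abstract additionally allows the computation of the decision variable itself to be corrupted, which lowers d’ below this optimal benchmark.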

PeerJ ◽  
2016 ◽  
Vol 4 ◽  
pp. e2124 ◽  
Author(s):  
Megan A.K. Peters ◽  
Wei Ji Ma ◽  
Ladan Shams

When we lift two differently-sized but equally-weighted objects, we expect the larger to be heavier, but the smaller feels heavier. However, traditional Bayesian approaches with “larger is heavier” priors predict the smaller object should feel lighter; this Size-Weight Illusion (SWI) has thus been labeled “anti-Bayesian” and has stymied psychologists for generations. We propose that previous Bayesian approaches neglect the brain’s inference process about density. In our Bayesian model, objects’ perceived heaviness relationship is based on both their size and inferred density relationship: observers evaluate competing, categorical hypotheses about objects’ relative densities, the inference about which is then used to produce the final estimate of weight. The model can qualitatively and quantitatively reproduce the SWI and explain other researchers’ findings, and also makes a novel prediction, which we confirmed. This same computational mechanism accounts for other multisensory phenomena and illusions; that the SWI follows the same process suggests that competitive-prior Bayesian inference can explain human perception across many domains.
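
The competitive-hypotheses idea can be sketched schematically (my own toy illustration with assumed numbers, not the published model): the observer compares categorical hypotheses about the objects' relative density given a noisy density measurement, and the posterior over hypotheses then shapes the weight percept.

```python
# Schematic sketch (assumed values, not the published model) of categorical
# inference about relative density in the size-weight illusion: two hypotheses
# compete -- "equal density" versus "the smaller object is denser" -- given a
# noisy measurement of the (log) density ratio.
import numpy as np
from scipy.stats import norm

# Classic SWI stimuli: equal mass, different volume (values assumed).
V_large, V_small = 2.0, 1.0
m_large, m_small = 1.0, 1.0

sigma_meas = 0.3                                        # assumed sensory noise
true_log_ratio = np.log((m_small / V_small) / (m_large / V_large))
x = norm.rvs(loc=true_log_ratio, scale=sigma_meas, random_state=1)

# Hypotheses about the log density ratio (means/widths assumed):
hyps = {"equal density": (0.0, 0.1), "smaller denser": (0.7, 0.4)}
prior = {"equal density": 0.8, "smaller denser": 0.2}   # assumed priors

post = {h: prior[h] * norm.pdf(x, mu, np.hypot(sd, sigma_meas))
        for h, (mu, sd) in hyps.items()}
z = sum(post.values())
post = {h: p / z for h, p in post.items()}
print(post)   # high posterior on "smaller denser": the smaller object is
              # inferred to be denser, so it feels heavier despite equal mass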


2015 ◽  
Vol 114 (6) ◽  
pp. 3076-3096 ◽  
Author(s):  
Ryan M. Peters ◽  
Phillip Staibano ◽  
Daniel Goldreich

The ability to resolve the orientation of edges is crucial to daily tactile and sensorimotor function, yet the means by which edge perception occurs is not well understood. Primate cortical area 3b neurons have diverse receptive field (RF) spatial structures that may participate in edge orientation perception. We evaluated five candidate RF models for macaque area 3b neurons, previously recorded while an oriented bar contacted the monkey's fingertip. We used a Bayesian classifier to assign each neuron a best-fit RF structure. We generated predictions for human performance by implementing an ideal observer that optimally decoded stimulus-evoked spike counts in the model neurons. The ideal observer predicted a saturating reduction in bar orientation discrimination threshold with increasing bar length. We tested 24 humans on an automated, precision-controlled bar orientation discrimination task and observed performance consistent with that predicted. We next queried the ideal observer to discover the RF structure and number of cortical neurons that best matched each participant's performance. Human perception was matched with a median of 24 model neurons firing throughout a 1-s period. The 10 lowest-performing participants were fit with RFs lacking inhibitory sidebands, whereas 12 of the 14 higher-performing participants were fit with RFs containing inhibitory sidebands. Participants whose discrimination improved as bar length increased to 10 mm were fit with longer RFs; those who performed well on the 2-mm bar, with narrower RFs. These results suggest plausible RF features and computational strategies underlying tactile spatial perception and may have implications for perceptual learning.
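
The ideal-observer logic can be illustrated with a small simulation (assumed tuning parameters; this is not the study's fitted RF models): a population of orientation-tuned Poisson neurons and a log-likelihood-ratio decision between two candidate bar orientations, with discrimination performance improving as the orientation difference grows.

```python
# Minimal sketch (assumed tuning curves, not the study's RF models) of an
# ideal observer discriminating two bar orientations from Poisson spike
# counts of a small population of orientation-tuned model neurons.
import numpy as np

rng = np.random.default_rng(2)
n_neurons = 24                                   # median population size reported
pref = np.linspace(-90, 90, n_neurons)           # preferred orientations (deg), assumed
tuning_width, r_max, r_base = 30.0, 20.0, 2.0    # assumed tuning parameters

def rates(theta):
    """Mean spike counts (per 1-s window) for a bar at orientation theta."""
    return r_base + r_max * np.exp(-0.5 * ((theta - pref) / tuning_width) ** 2)

def ideal_observer_correct(d_theta, n_trials=20_000):
    """Proportion correct for +/- d_theta/2 via the Poisson log-likelihood ratio."""
    th1, th2 = -d_theta / 2, d_theta / 2
    r1, r2 = rates(th1), rates(th2)
    correct = 0
    for th_true, r_true in ((th1, r1), (th2, r2)):
        counts = rng.poisson(r_true, size=(n_trials // 2, n_neurons))
        llr = counts @ (np.log(r1) - np.log(r2)) - (r1.sum() - r2.sum())
        choose_1 = llr > 0
        correct += np.sum(choose_1 if th_true == th1 else ~choose_1)
    return correct / n_trials

for d_theta in (2.0, 5.0, 10.0, 20.0):
    print(f"orientation difference {d_theta:4.1f} deg -> "
          f"proportion correct {ideal_observer_correct(d_theta):.3f}")
```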


2019 ◽  
Author(s):  
Mark Andrews

The study of memory for texts has had a long tradition of research in psychology. According to most general accounts, the recognition or recall of items in a text is based on querying a memory representation that is built up on the basis of background knowledge. The objective of this paper is to describe and thoroughly test a Bayesian model of these general accounts. In particular, we present a model that describes how we use our background knowledge to form memories in terms of Bayesian inference of statistical patterns in the text, followed by posterior predictive inference of the words that are typical of those inferred patterns. This provides us with precise predictions about which words will be remembered, whether veridically or erroneously, from any given text. We tested these predictions using behavioural data from a memory experiment with a large sample of randomly chosen texts from a representative corpus of British English. The results show that the probability of remembering any given word in the text, whether falsely or veridically, is well predicted by the Bayesian model. Moreover, by every measure used in the analyses, the predictions of the Bayesian model were superior to those of nontrivial alternative models of text memory, often overwhelmingly so. We conclude that these results provide strong evidence in favour of the Bayesian account of text memory presented in this paper.
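
The two-step logic, infer the statistical pattern of a text and then score every word by its posterior predictive probability, can be sketched with a toy Dirichlet-multinomial unigram model (the paper's model infers richer latent topic structure; the vocabulary and prior below are assumed for illustration).

```python
# Toy sketch (not the paper's model, which uses latent topics) of scoring
# vocabulary words by posterior predictive probability given a text, under a
# Dirichlet-multinomial unigram model with symmetric prior alpha.
from collections import Counter

vocab = ["doctor", "nurse", "hospital", "patient", "banana", "river"]
text = "the doctor asked the nurse to check on the patient".split()

alpha = 0.5                               # assumed symmetric Dirichlet prior
counts = Counter(w for w in text if w in vocab)
total = sum(counts.values())
V = len(vocab)

# Posterior predictive: p(word | text) = (count + alpha) / (total + alpha * V)
pred = {w: (counts[w] + alpha) / (total + alpha * V) for w in vocab}
for w, p in sorted(pred.items(), key=lambda kv: -kv[1]):
    print(f"{w:10s} {p:.3f}")
# In this unigram toy, unseen words such as "hospital" get only prior
# smoothing; in the full topic-based model, words typical of the inferred
# pattern rise in predictive probability, which is what predicts false recall.
```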


2021 ◽  
Author(s):  
Dmytro Perepolkin ◽  
Benjamin Goodrich ◽  
Ullrika Sahlin

This paper extends the application of indirect Bayesian inference to probability distributions defined in terms of quantiles of the observable quantities. Quantile-parameterized distributions are characterized by high shape flexibility and by the interpretability of their parameters, and are therefore useful for elicitation on observables. To encode uncertainty in the quantiles elicited from experts, we propose a Bayesian model based on the metalog distribution and a version of the Dirichlet prior. The resulting “hybrid” expert elicitation protocol, which characterizes uncertainty in parameters through questions about observable quantities, is discussed and contrasted with parametric and predictive elicitation.
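
As an illustration of quantile parameterization (elicited values assumed; this is not the paper's implementation), a three-term metalog can be fitted exactly to three elicited quantiles, because its quantile function is linear in its coefficients.

```python
# Minimal sketch (assumed elicited values) of a three-term metalog: the
# quantile function M(y) = a1 + a2*logit(y) + a3*(y - 0.5)*logit(y) is linear
# in (a1, a2, a3), so three (probability, quantile) pairs determine it by
# solving a 3x3 linear system.
import numpy as np

probs = np.array([0.10, 0.50, 0.90])      # elicited cumulative probabilities
quants = np.array([4.0, 9.0, 17.0])       # elicited quantiles (assumed units)

logit = np.log(probs / (1 - probs))
B = np.column_stack([np.ones_like(probs), logit, (probs - 0.5) * logit])
a = np.linalg.solve(B, quants)

def metalog_quantile(y):
    ly = np.log(y / (1 - y))
    return a[0] + a[1] * ly + a[2] * (y - 0.5) * ly

ys = np.linspace(0.01, 0.99, 5)
print(np.round(metalog_quantile(ys), 2))  # smooth curve through the elicited quantiles
```

The Bayesian treatment in the paper then places a prior over such elicited quantile judgements rather than fixing them, which is what the Dirichlet-based construction provides.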


2018 ◽  
Vol 4 (1) ◽  
pp. 403-422 ◽  
Author(s):  
Andrea Tacchetti ◽  
Leyla Isik ◽  
Tomaso A. Poggio

Recognizing the people, objects, and actions in the world around us is a crucial aspect of human perception that allows us to plan and act in our environment. Remarkably, our proficiency in recognizing semantic categories from visual input is unhindered by transformations that substantially alter their appearance (e.g., changes in lighting or position). The ability to generalize across these complex transformations is a hallmark of human visual intelligence, which has been the focus of wide-ranging investigation in systems and computational neuroscience. However, while the neural machinery of human visual perception has been thoroughly described, the computational principles dictating its functioning remain unknown. Here, we review recent results in brain imaging, neurophysiology, and computational neuroscience in support of the hypothesis that the ability to support the invariant recognition of semantic entities in the visual world shapes which neural representations of sensory input are computed by human visual cortex.


Author(s):  
Jakub Krukar ◽  
Charu Manivannan ◽  
Mehul Bhatt ◽  
Carl Schultz

Isovist analysis has typically been applied to the study of human perception in indoor built environments, predominantly in 2D, although recent works have explored isovist techniques in 3D. However, 3D applications of isovist analysis simply extrapolate the assumptions of the 2D counterpart without questioning whether these assumptions remain valid in 3D. They do not: because human perception is embodied, the perception of vertical space differs from the perception of horizontal space. We present a user study demonstrating that an embodied 3D isovist that accounts for this phenomenon (formalised based on the notion of spatial artefacts) predicts human perception of space more accurately than the generic volumetric 3D isovist, specifically with respect to spaciousness and complexity. We suggest that the embodied 3D isovist should be used for 3D analyses in which human perception is of key interest.
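
For readers unfamiliar with isovists, the rough 2D ray-casting sketch below (my own toy example, not the authors' implementation) shows the basic construct: the set of points visible from a viewpoint. The volumetric 3D isovist extends the same ray casting to a volume; the embodied variant proposed in the paper additionally treats vertical and horizontal visibility differently.

```python
# Rough sketch (toy floor plan, assumed) of a 2D isovist by ray casting on a
# grid: from a viewpoint, walk along rays and collect every free cell seen
# before an obstacle is hit.
import numpy as np

grid = np.zeros((20, 20), dtype=int)     # 0 = free space, 1 = wall
grid[10, 5:15] = 1                       # a single interior wall

def isovist_cells(grid, origin, n_rays=360, step=0.25):
    """Return the set of free cells visible from `origin` by ray casting."""
    visible = set()
    h, w = grid.shape
    for ang in np.linspace(0, 2 * np.pi, n_rays, endpoint=False):
        x, y = origin
        dx, dy = step * np.cos(ang), step * np.sin(ang)
        while 0 <= int(x) < h and 0 <= int(y) < w and grid[int(x), int(y)] == 0:
            visible.add((int(x), int(y)))
            x, y = x + dx, y + dy
    return visible

print("isovist size (cells):", len(isovist_cells(grid, (5.0, 10.0))))
```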


Entropy ◽  
2020 ◽  
Vol 22 (8) ◽  
pp. 890
Author(s):  
Sergey Oladyshkin ◽  
Farid Mohammadi ◽  
Ilja Kroeker ◽  
Wolfgang Nowak

Gaussian process emulators (GPE) are a machine learning approach that replicates computationally demanding models using training runs of that model. Constructing such a surrogate is very challenging and, in the context of Bayesian inference, the training runs should be well invested. The current paper offers a fully Bayesian view on GPEs for Bayesian inference accompanied by Bayesian active learning (BAL). We introduce three BAL strategies that adaptively identify training sets for the GPE using information-theoretic arguments. The first strategy relies on Bayesian model evidence, which indicates how well the GPE matches the measurement data; the second is based on relative entropy, which indicates the relative information gain for the GPE; and the third is founded on information entropy, which indicates the missing information in the GPE. We illustrate the performance of our three strategies using analytical and carbon-dioxide benchmarks. The paper shows evidence of convergence against a reference solution and demonstrates quantification of post-calibration uncertainty by comparing the three introduced strategies. We conclude that the Bayesian model evidence-based and relative entropy-based strategies outperform the entropy-based strategy because the latter can be misleading during the BAL. The relative entropy-based strategy demonstrates superior performance to the Bayesian model evidence-based strategy.
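
The overall GPE-plus-BAL loop can be sketched in simplified form (not the authors' code; the selection rule below is a plain predictive-variance criterion rather than the paper's evidence-based or relative-entropy criteria, and the "expensive" model is an assumed stand-in).

```python
# Simplified sketch (assumed toy model, variance-based acquisition) of Bayesian
# active learning for a Gaussian process emulator: start from a few runs of an
# expensive model, then repeatedly add the candidate input where the emulator
# is most uncertain and refit.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_model(x):                   # stand-in for the costly simulator
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(3)
X_train = rng.uniform(0, 5, size=(4, 1))  # initial training runs
y_train = expensive_model(X_train).ravel()
candidates = np.linspace(0, 5, 200).reshape(-1, 1)

for it in range(10):
    gpe = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                   normalize_y=True).fit(X_train, y_train)
    _, std = gpe.predict(candidates, return_std=True)
    x_new = candidates[np.argmax(std)]    # most uncertain candidate input
    X_train = np.vstack([X_train, x_new])
    y_train = np.append(y_train, expensive_model(x_new[0]))

print(f"training set grew to {len(X_train)} runs; "
      f"max predictive std at the last step {std.max():.3f}")
```

The paper's evidence-based and relative-entropy strategies replace the variance criterion with scores that also account for the measurement data, which is why they behave differently from the plain entropy-style rule.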

