A theory of learning to infer

2019 ◽  
Author(s):  
Ishita Dasgupta ◽  
Eric Schulz ◽  
Joshua B. Tenenbaum ◽  
Samuel J. Gershman

Abstract: Bayesian theories of cognition assume that people can integrate probabilities rationally. However, several empirical findings contradict this proposition: human probabilistic inferences are prone to systematic deviations from optimality. Puzzlingly, these deviations sometimes go in opposite directions. Whereas some studies suggest that people under-react to prior probabilities (base rate neglect), other studies find that people under-react to the likelihood of the data (conservatism). We argue that these deviations arise because the human brain does not rely solely on a general-purpose mechanism for approximating Bayesian inference that is invariant across queries. Instead, the brain is equipped with a recognition model that maps queries to probability distributions. The parameters of this recognition model are optimized to get the output as close as possible, on average, to the true posterior. Because of our limited computational resources, the recognition model will allocate its resources so as to be more accurate for high probability queries than for low probability queries. By adapting to the query distribution, the recognition model “learns to infer.” We show that this theory can explain why and when people under-react to the data or the prior, and a new experiment demonstrates that these two forms of under-reaction can be systematically controlled by manipulating the query distribution. The theory also explains a range of related phenomena: memory effects, belief bias, and the structure of response variability in probabilistic reasoning. We also discuss how the theory can be integrated with prior sampling-based accounts of approximate inference.
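The resource-allocation argument in the abstract can be illustrated with a toy calculation. Assuming a maximally capacity-limited recognition model that must return a single distribution for every query, the output that minimizes the query-frequency-weighted forward KL divergence is, in closed form, the frequency-weighted mixture of the true posteriors, so the approximation error concentrates on rare queries. All numbers below are invented for illustration; this is a sketch of the principle, not the paper's model.

```python
import numpy as np

# Two queries with true posteriors over 3 hypotheses (illustrative values).
p_q1 = np.array([0.7, 0.2, 0.1])   # frequent query
p_q2 = np.array([0.1, 0.3, 0.6])   # rare query
w = np.array([0.9, 0.1])           # query distribution (frequencies)

# A maximally capacity-limited recognition model must emit ONE distribution
# for both queries.  Minimizing the query-averaged forward KL,
#   L(q) = w1*KL(p_q1||q) + w2*KL(p_q2||q),
# is solved in closed form by the frequency-weighted mixture of posteriors.
q_star = w[0] * p_q1 + w[1] * p_q2

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

err_frequent = kl(p_q1, q_star)
err_rare = kl(p_q2, q_star)
# The shared answer ends up much closer to the frequent query's posterior:
# the model "learns to infer" accurately where queries are common.
```

The same trade-off appears for any fixed model capacity: the optimum spends its accuracy budget in proportion to how often each query is asked.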

2017 ◽  
Author(s):  
Ishita Dasgupta ◽  
Eric Schulz ◽  
Noah D. Goodman ◽  
Samuel J. Gershman

Abstract: Bayesian models of cognition assume that people compute probability distributions over hypotheses. However, the required computations are frequently intractable or prohibitively expensive. Since people often encounter many closely related distributions, selective reuse of computations (amortized inference) is a computationally efficient use of the brain’s limited resources. We present three experiments that provide evidence for amortization in human probabilistic reasoning. When sequentially answering two related queries about natural scenes, participants’ responses to the second query systematically depend on the structure of the first query. This influence is sensitive to the content of the queries, only appearing when the queries are related. Using a cognitive load manipulation, we find evidence that people amortize summary statistics of previous inferences, rather than storing the entire distribution. These findings support the view that the brain trades off accuracy and computational cost, to make efficient use of its limited cognitive resources to approximate probabilistic inference.
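The reuse idea can be sketched in a few lines: run exact inference once per query, cache only a summary statistic (a single probability, not the full posterior), and answer a later related query from the cache. The scene space, predicates and function names below are hypothetical, not the paper's stimuli or procedure.

```python
# Amortized inference as selective reuse of computation: expensive exact
# inference runs once per query, and a repeated or related query reuses the
# stored summary statistic instead of re-enumerating the hypothesis space.

SCENES = [(a, b) for a in range(4) for b in range(4)]  # 16 equally likely toy scenes
calls = {"n": 0}

def expensive_prob(pred):
    calls["n"] += 1                     # count full enumerations
    return sum(1 for s in SCENES if pred(s)) / len(SCENES)

cache = {}                              # stores summary statistics only

def amortized_prob(name, pred):
    if name not in cache:
        cache[name] = expensive_prob(pred)
    return cache[name]

p_a = amortized_prob("a>1", lambda s: s[0] > 1)        # full inference
p_b = amortized_prob("b>1", lambda s: s[1] > 1)        # full inference
p_a_again = amortized_prob("a>1", lambda s: s[0] > 1)  # cache hit, no new work
p_joint = p_a * p_b   # composed from cached summaries (valid here: a, b independent)
```

Only two expensive enumerations ever run; the third query and the conjunction are assembled from stored summaries, mirroring the accuracy/cost trade-off the abstract describes.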


2018 ◽  
Author(s):  
Seth W. Egger ◽  
Mehrdad Jazayeri

Abstract: Bayesian models of behavior have advanced the idea that humans combine prior beliefs and sensory observations to minimize uncertainty. How the brain implements Bayes-optimal inference, however, remains poorly understood. Simple behavioral tasks suggest that the brain can flexibly represent and manipulate probability distributions. An alternative view is that the brain relies on simple algorithms that can implement Bayes-optimal behavior only when the computational demands are low. To distinguish between these alternatives, we devised a task in which Bayes-optimal performance could not be matched by simple algorithms. We asked subjects to estimate and reproduce a time interval by combining prior information with one or two sequential measurements. In the domain of time, measurement noise increases with duration. This property puts the integration of multiple measurements beyond the reach of simple algorithms. We found that subjects were able to update their estimates using the second measurement, but their performance was suboptimal, suggesting that they were unable to update full probability distributions. Instead, subjects’ behavior was consistent with an algorithm that predicts upcoming sensory signals and applies a nonlinear function to errors in prediction to update estimates. These results indicate that the inference strategies humans deploy may deviate from Bayes-optimal integration when the computational demands are high.


2013 ◽  
pp. 1516-1534
Author(s):  
Lochi Yu ◽  
Cristian Ureña

Since the first recordings of the brain's electrical activity more than 100 years ago, remarkable contributions have been made to understanding brain function and its interaction with the environment. Regardless of the nature of the brain–computer interface (BCI), a world of opportunities and possibilities has opened up, not only for people with severe disabilities but also for those pursuing innovative human interfaces. A deeper understanding of EEG signals, along with refined technologies for recording them, is helping to improve the performance of EEG-based BCIs. Better processing and feature-extraction methods, such as Independent Component Analysis (ICA) and the Wavelet Transform (WT), are giving promising results that deserve further exploration. Different types of classifiers, and combinations of them, have been used in EEG BCIs. Linear, neural and nonlinear Bayesian classifiers have been the most widely used, providing accuracies ranging between 60% and 90%. Some, like Support Vector Machine (SVM) classifiers, demand more computational resources but generalize well. Linear Discriminant Analysis (LDA) classifiers generalize poorly but require few computational resources, making them well suited to some real-time BCIs. Better classifiers must be developed to tackle the large variability of patterns across subjects, using every available resource, method or technology.
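As a concrete reference point for the classifier comparison, here is the closed-form two-class LDA rule (pooled covariance, midpoint threshold) applied to synthetic two-dimensional "features". The data are invented, no real EEG is involved, and this is only a sketch of why LDA is computationally cheap: training reduces to two means and one linear solve.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D "EEG features" for two mental states (illustrative data only).
X0 = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
X1 = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(100, 2))

# Two-class LDA in closed form:
#   w = Sigma_pooled^{-1} (mu1 - mu0), threshold at the midpoint projection.
mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
S = (np.cov(X0.T) + np.cov(X1.T)) / 2.0        # pooled covariance estimate
w_lda = np.linalg.solve(S, mu1 - mu0)           # discriminant direction
b = w_lda @ (mu0 + mu1) / 2.0                   # decision threshold

def predict(X):
    return (X @ w_lda > b).astype(int)          # 0 or 1 per trial

acc = np.concatenate([predict(X0) == 0, predict(X1) == 1]).mean()
```

Training costs one 2×2 linear solve, which is why LDA suits real-time BCIs; an SVM would typically need an iterative optimization over all training trials.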


2020 ◽  
Vol 29 (2) ◽  
pp. 160 ◽  
Author(s):  
Frédéric Allaire ◽  
Jean-Baptiste Filippi ◽  
Vivien Mallet

Numerical simulations of wildfire spread can provide support in deciding firefighting actions, but their predictive performance is challenged by uncertainty in model inputs stemming from weather forecasts, fuel parameterisation and other fire characteristics. In this study, we assign probability distributions to the inputs and propagate the uncertainty by running hundreds of Monte Carlo simulations. The ensemble of simulations is summarised via a burn probability map, whose evaluation against the corresponding observed burned surface is not straightforward. We define several properties and introduce probabilistic scores that are common in meteorological applications. Based on these elements, we evaluate the predictive performance of our ensembles for seven fires that occurred in Corsica from mid-2017 to early 2018. We obtain fair performance in some of the cases, but the accuracy and reliability of the forecasts can be improved. The ensemble generation can be accomplished in a reasonable amount of time and could be used in an operational context provided that sufficient computational resources are available. The proposed probabilistic scores are also appropriate for a calibration process to improve the ensembles.
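A standard probabilistic score of the kind the abstract refers to is the Brier score from meteorological verification. The sketch below builds a burn probability map from toy ensemble members and scores it against a toy observed burned mask; the random fields are invented and the paper's actual scores and data are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# 100 toy Monte Carlo members, each a 10x10 boolean "burned" map.
members = rng.random((100, 10, 10)) < 0.3
burn_prob = members.mean(axis=0)           # burn probability map: per-cell fraction
                                           # of members in which the cell burned
observed = rng.random((10, 10)) < 0.3      # toy observed burned surface

# Brier score: mean squared difference between forecast probability and outcome
# (0 = perfect, lower is better).
brier = float(((burn_prob - observed) ** 2).mean())

# Brier skill score against a climatological constant-probability reference;
# positive values mean the ensemble beats the reference forecast.
base = observed.mean()
brier_ref = float(((base - observed) ** 2).mean())
skill = 1.0 - brier / brier_ref
```

The same per-cell probabilities also feed reliability diagrams, which is why a calibrated ensemble and a sharp one can be distinguished with these scores.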


2017 ◽  
Vol 1 (3) ◽  
Author(s):  
Vito Di Maio ◽  
Francesco Ventriglia ◽  
Silvia Santillo

Synaptic transmission is the basic mechanism of information transfer between neurons, not only in the brain but throughout the nervous system. In this review we briefly summarize some of the main parameters that produce stochastic variability in the synaptic response. This variability has different effects on important brain phenomena, such as learning and memory, and alterations of its basic factors can cause brain malfunction.
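One classical source of the stochastic variability discussed here is quantal release. A minimal Katz-style binomial sketch, with invented parameters (n release sites, release probability p, quantal amplitude q), reproduces the textbook response statistics mean = n·p·q and variance = n·p·(1−p)·q²:

```python
import random

random.seed(0)

# Quantal model of an EPSP: n independent release sites, each releasing one
# vesicle with probability p; each released quantum adds amplitude q.
n, p, q = 10, 0.4, 0.5     # sites, release probability, quantal amplitude (mV)

def epsp():
    released = sum(1 for _ in range(n) if random.random() < p)
    return released * q    # trial-to-trial amplitude is binomial, scaled by q

trials = [epsp() for _ in range(20000)]
mean = sum(trials) / len(trials)
var = sum((x - mean) ** 2 for x in trials) / len(trials)
# Theory: mean = n*p*q = 2.0 mV, variance = n*p*(1-p)*q^2 = 0.6 mV^2
```

Real synapses add further variance sources (quantal size jitter, diffusion and receptor noise), which is what the review's parameters address; the binomial term is only the baseline.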


2018 ◽  
Vol 35 (15) ◽  
pp. 2674-2676 ◽  
Author(s):  
Shubham Chandak ◽  
Kedar Tatwawadi ◽  
Idoia Ochoa ◽  
Mikel Hernaez ◽  
Tsachy Weissman

Abstract
Motivation: High-throughput sequencing technologies produce huge amounts of data in the form of short genomic reads, associated quality values and read identifiers. Because of the significant structure present in these FASTQ datasets, general-purpose compressors are unable to fully exploit much of the inherent redundancy. Although there has been a lot of work on designing FASTQ compressors, most of them lack support for one or more crucial properties, such as variable-length reads, scalability to high-coverage datasets, pairing-preserving compression and lossless compression.
Results: In this work, we propose SPRING, a reference-free compressor for FASTQ files. SPRING supports a wide variety of compression modes and features, including lossless compression, pairing-preserving compression, lossy compression of quality values, long-read compression and random access. SPRING achieves substantially better compression than existing tools; for example, SPRING compresses 195 GB of 25× whole-genome human FASTQ from Illumina’s NovaSeq sequencer to less than 7 GB, around 1.6× smaller than previous state-of-the-art FASTQ compressors. SPRING achieves this improvement while using comparable computational resources.
Availability and implementation: SPRING can be downloaded from https://github.com/shubhamchandak94/SPRING.
Supplementary information: Supplementary data are available at Bioinformatics online.
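The structure that FASTQ compressors exploit is visible in a minimal parser that splits each four-line record into its identifier, sequence and quality streams. This illustrates only the file format; SPRING's actual pipeline (read reordering, quality modeling, identifier templating) is far more elaborate and is not shown here.

```python
# A FASTQ record is 4 lines: identifier, sequence, '+' separator, quality string.
fastq = (
    "@read1\nACGTACGT\n+\nIIIIHHHH\n"
    "@read2\nACGTACGA\n+\nIIIIHHHG\n"
)

def records(text):
    lines = text.strip().split("\n")
    for i in range(0, len(lines), 4):
        # yield (identifier, sequence, quality); the '+' line carries no data
        yield lines[i], lines[i + 1], lines[i + 3]

ids, seqs, quals = zip(*records(fastq))
# Specialized compressors work per stream: near-duplicate reads, smoothly
# varying quality values and templated identifiers each compress far better
# separately than the interleaved file does under a general-purpose compressor.
```

For scale, the abstract's headline result (195 GB down to under 7 GB) is roughly a 28× overall reduction.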


1964 ◽  
Vol 17 (4) ◽  
pp. 414-418
Author(s):  
L. Gérardin

The observation of a radar display by a human operator leads to the establishment of aircraft tracks, which are subsequently used by the controller. More and more often it is proposed to replace both the PPI display and the human observer by an automatic computer, either special- or general-purpose, to perform tracking. In the present paper the basic performance of these two modes of operation is examined, taking into account the psychological and physiological features of human vision and hence the mental associations of the viewer. The computer is more precise, but more costly, and when saturated its performance drops abruptly. The number of tracks a human operator can handle is small, but the brain is very versatile and works very well in confused situations, with a slower drop in efficiency than the computer.


2007 ◽  
Vol 19 (10) ◽  
pp. 2780-2796 ◽  
Author(s):  
Shun-ichi Amari

When there are a number of stochastic models in the form of probability distributions, one needs to integrate them. Mixtures of distributions are frequently used, but exponential mixtures also provide a good means of integration. This letter proposes a one-parameter family of integration, called α-integration, which includes all of these well-known integrations. These are generalizations of various averages of numbers such as arithmetic, geometric, and harmonic averages. There are psychophysical experiments that suggest that α-integrations are used in the brain. The α-divergence between two distributions is defined, which is a natural generalization of Kullback-Leibler divergence and Hellinger distance, and it is proved that α-integration is optimal in the sense of minimizing α-divergence. The theory is applied to generalize the mixture of experts and the product of experts to the α-mixture of experts. The α-predictive distribution is also stated in the Bayesian framework.
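The number-averaging special cases mentioned in the abstract can be checked directly. Assuming Amari's representation f_α(u) = u^((1−α)/2) for α ≠ 1 and f_1(u) = log u, the α-mean f_α⁻¹(Σ wᵢ f_α(xᵢ)) recovers the arithmetic mean at α = −1, the geometric mean at α = 1 and the harmonic mean at α = 3:

```python
import math

def alpha_mean(xs, alpha, ws=None):
    """Amari-style alpha-mean of positive numbers: f^{-1}(sum_i w_i f(x_i)),
    with f(u) = u**((1-alpha)/2) for alpha != 1 and f(u) = log(u) for alpha == 1."""
    ws = ws or [1.0 / len(xs)] * len(xs)
    if alpha == 1:
        # limiting case: geometric (log-domain) average
        return math.exp(sum(w * math.log(x) for w, x in zip(ws, xs)))
    e = (1.0 - alpha) / 2.0
    return sum(w * x ** e for w, x in zip(ws, xs)) ** (1.0 / e)

xs = [1.0, 4.0]
a_arith = alpha_mean(xs, -1)   # 2.5, the arithmetic mean
a_geom = alpha_mean(xs, 1)     # 2.0, the geometric mean
a_harm = alpha_mean(xs, 3)     # 1.6, the harmonic mean
```

Applied pointwise to probability densities (with renormalization), the same representation yields the α-mixtures of the letter, interpolating between the ordinary mixture and the exponential mixture.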


Radiocarbon ◽  
2001 ◽  
Vol 43 (2A) ◽  
pp. 373-380 ◽  
Author(s):  
Peter Steier ◽  
Werner Rom ◽  
Stephan Puchegger

The probabilistic radiocarbon calibration approach, which largely has replaced the intercept method in 14C dating, is based on the so-called Bayes' theorem (Bayes 1763). Besides single-sample calibration, Bayesian mathematics also supplies tools for combining 14C results of many samples with independent archaeological information such as typology or stratigraphy (Buck et al. 1996). However, specific assumptions in the “prior probabilities”, used to transform the archaeological information into mathematical probability distributions, may bias the results (Steier and Rom 2000). A general technique for guarding against such a bias is “sensitivity analysis”, in which a range of possible prior probabilities is tested. Only results that prove robust in this analysis should be used. We demonstrate the impact of this method for an assumed, yet realistic case of stratigraphically ordered samples from the Hallstatt period, i.e. the Early Iron Age in Central Europe.
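The sensitivity-analysis recipe can be sketched numerically: calibrate the same measurement under two different priors and inspect how far the posterior moves. The linear "calibration curve", the step prior and every number below are purely illustrative; real calibration uses tabulated curves such as IntCal and, for stratified sequences, joint priors over many samples.

```python
import numpy as np

years = np.arange(-800, -399, dtype=float)   # calendar-year grid (toy Hallstatt era)
# Hypothetical linear calibration curve: 14C age BP per calendar year
# (illustrative only; NOT a real curve).
cal_curve = 2450.0 - 0.5 * (years + 600.0)
meas, sigma = 2450.0, 30.0                   # measured 14C age BP and 1-sigma error

# Likelihood of the measurement at each hypothesized calendar year.
like = np.exp(-0.5 * ((meas - cal_curve) / sigma) ** 2)

def posterior_mean(prior):
    post = like * prior                      # Bayes' theorem on the grid
    post = post / post.sum()
    return float((years * post).sum())

uniform = np.ones_like(years)
step = np.where(years < -600.0, 2.0, 1.0)    # alternative prior favouring older dates

shift = abs(posterior_mean(uniform) - posterior_mean(step))
# If `shift` is large relative to the calibrated uncertainty, the result is
# prior-driven and fails the robustness check; only robust results should be used.
```

Here the measurement alone centers the date near 600 BC, and the deliberately biased prior drags it by a couple of decades: enough to matter for Hallstatt-period chronology, which is exactly what sensitivity analysis is meant to flag.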


2017 ◽  
pp. 16-24
Author(s):  
Mildren Yaneth Uscategui Blanco ◽  
Adriana Boscan Andrade

Abstract: The general purpose of this research is to analyze the motivational role that neuroeducation plays in the process of learning introductory calculus among students at the Universidad Francisco de Paula Santander. The research drew on authors such as Mora (2013), Campos (2010), Cotto (2009), Blakemore and Frith (2007) and De La Cruz (2004), among others. The methodology applied was a qualitative study, with a population consisting of first-semester students from the university's various degree programmes during the first semester of 2017. Data were collected through observation and in-depth interviews. Among the results, it was concluded that it is necessary to identify students' degree of motivation for learning calculus in order to guarantee quality education across the different areas of study; students' understanding during the training process, which allows them to acquire the knowledge necessary for their professional formation, is indispensable for their professional development. It is recommended to use neuroscience as a tool to understand how the brain learns; this knowledge will help substantially improve the effectiveness of teaching-learning processes. Key words: Learning, Brain, Mathematics Education, Neurosciences, Neuroeducation.

