Demystifying excessively volatile human learning: A Bayesian persistent prior and a neural approximation

2016 ◽  
Author(s):  
Chaitanya K. Ryali ◽  
Gautam Reddy ◽  
Angela J. Yu

Abstract. Understanding how humans and animals learn about statistical regularities in stable and volatile environments, and utilize these regularities to make predictions and decisions, is an important problem in neuroscience and psychology. Using a Bayesian modeling framework, specifically the Dynamic Belief Model (DBM), it has previously been shown that humans tend to make the default assumption that environmental statistics undergo abrupt, unsignaled changes, even when environmental statistics are actually stable. Because exact Bayesian inference in this setting, an example of switching state space models, is computationally intensive, a number of approximately Bayesian and heuristic algorithms have been proposed to account for learning/prediction in the brain. Here, we examine a neurally plausible algorithm, a special case of leaky integration dynamics we denote as EXP (for exponential filtering), that is significantly simpler than all previously suggested algorithms except for the delta-learning rule, and which far outperforms the delta rule in approximating Bayesian prediction performance. We derive the theoretical relationship between DBM and EXP, and show that EXP gains computational efficiency by forgoing the representation of inferential uncertainty (as does the delta rule), but that it nevertheless achieves near-Bayesian performance due to its ability to incorporate a “persistent prior” influence unique to DBM and absent from the other algorithms. Furthermore, we show that EXP is comparable to DBM but better than all other models in reproducing human behavior in a visual search task, suggesting that human learning and prediction also incorporate an element of persistent prior. More broadly, our work demonstrates that when observations are information-poor, detecting changes or modulating the learning rate is both difficult and (thus) unnecessary for making Bayes-optimal predictions.
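The contrast the abstract draws between EXP and the delta rule can be sketched numerically. The toy below predicts a Bernoulli sequence; the parameter values (alpha, beta, p0) are made up for illustration and are not the paper's exact parameterization:

```python
def exp_predict(obs, alpha=0.1, beta=0.3, p0=0.5):
    """Leaky integration with a persistent prior: each step mixes the new
    observation into the running prediction while a fixed term keeps pulling
    the prediction back toward the prior mean p0."""
    p = p0
    for x in obs:
        p = alpha * p0 + beta * x + (1 - alpha - beta) * p
    return p

def delta_predict(obs, eta=0.3, p0=0.5):
    """Plain delta rule: no persistent prior term, so the prediction can
    saturate at the extremes."""
    p = p0
    for x in obs:
        p += eta * (x - p)
    return p

# After a long run of identical observations the delta rule saturates at 1.0,
# while EXP stays pulled toward the prior, converging to the fixed point
# (alpha * p0 + beta) / (alpha + beta).
ones = [1] * 200
print(round(exp_predict(ones), 3))    # 0.875
print(round(delta_predict(ones), 3))  # 1.0
```

The persistent-prior term is what keeps EXP's prediction bounded away from certainty even under long runs of identical evidence, mirroring the behavioral signature the paper attributes to DBM.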

2017 ◽  
Vol 114 (19) ◽  
pp. E3859-E3868 ◽  
Author(s):  
Florent Meyniel ◽  
Stanislas Dehaene

Learning is difficult when the world fluctuates randomly and ceaselessly. Classical learning algorithms, such as the delta rule with constant learning rate, are not optimal. Mathematically, the optimal learning rule requires weighting prior knowledge and incoming evidence according to their respective reliabilities. This “confidence weighting” implies the maintenance of an accurate estimate of the reliability of what has been learned. Here, using fMRI and an ideal-observer analysis, we demonstrate that the brain’s learning algorithm relies on confidence weighting. While in the fMRI scanner, human adults attempted to learn the transition probabilities underlying an auditory or visual sequence, and reported their confidence in those estimates. They knew that these transition probabilities could change simultaneously at unpredicted moments, and therefore that the learning problem was inherently hierarchical. Subjective confidence reports tightly followed the predictions derived from the ideal observer. In particular, subjects managed to attach distinct levels of confidence to each learned transition probability, as required by Bayes-optimal inference. Distinct brain areas tracked the likelihood of new observations given current predictions, and the confidence in those predictions. Both signals were combined in the right inferior frontal gyrus, where they operated in agreement with the confidence-weighting model. This brain region also presented signatures of a hierarchical process that disentangles distinct sources of uncertainty. Together, our results provide evidence that the sense of confidence is an essential ingredient of probabilistic learning in the human brain, and that the right inferior frontal gyrus hosts a confidence-based statistical learning algorithm for auditory and visual sequences.
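The core of confidence weighting, in its simplest textbook (Gaussian) form, is to weight prior knowledge and incoming evidence by their respective precisions. The paper's ideal observer tracks transition probabilities hierarchically; the sketch below shows only the elementary weighting step, with hypothetical precision values:

```python
def confidence_weighted_update(mu_prior, tau_prior, x, tau_obs):
    """Combine prior knowledge and incoming evidence in proportion to their
    reliabilities (precisions): a high-confidence prior moves very little,
    while a low-confidence prior yields readily to new data."""
    tau_post = tau_prior + tau_obs
    mu_post = (tau_prior * mu_prior + tau_obs * x) / tau_post
    return mu_post, tau_post

# A reliable prior (precision 9) barely yields to a noisy observation (precision 1):
mu, tau = confidence_weighted_update(0.5, 9.0, 1.0, 1.0)
print(mu, tau)  # 0.55 10.0
```

Note that the posterior precision grows with every observation, which is exactly the "accurate estimate of the reliability of what has been learned" that a constant-learning-rate delta rule fails to maintain.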


2018 ◽  
Author(s):  
Samuel A. Nastase ◽  
Ben Davis ◽  
Uri Hasson

Abstract. Current neurobiological models assign a central role to predictive processes calibrated to environmental statistics. Neuroimaging studies examining the encoding of stimulus uncertainty have relied almost exclusively on manipulations in which stimuli were presented in a single sensory modality, and further assumed that neural responses vary monotonically with uncertainty. This has left a gap in theoretical development with respect to two core issues: i) are there cross-modal brain systems that encode input uncertainty in a way that generalizes across sensory modalities, and ii) are there brain systems that track input uncertainty in a non-monotonic fashion? We used multivariate pattern analysis to address these two issues using auditory, visual and audiovisual inputs. We found signatures of cross-modal encoding in frontoparietal, orbitofrontal, and association cortices using a searchlight cross-classification analysis where classifiers trained to discriminate levels of uncertainty in one modality were tested in another modality. Additionally, we found widespread systems encoding uncertainty non-monotonically using classifiers trained to discriminate intermediate levels of uncertainty from both the highest and lowest uncertainty levels. These findings comprise the first comprehensive report of cross-modal and non-monotonic neural sensitivity to statistical regularities in the environment, and suggest that conventional paradigms testing for monotonic responses to uncertainty in a single sensory modality may have limited generalizability.
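Cross-classification of the kind described (train on one modality, test on the other) can be sketched on synthetic data. Below, a nearest-centroid classifier stands in for the searchlight classifiers, and the "voxel patterns" and uncertainty levels are fabricated for illustration; none of this is the study's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_modality(n_per_level, levels, offset):
    """Synthetic 'voxel patterns': each uncertainty level has its own mean,
    and each modality adds a constant offset (purely illustrative data)."""
    X, y = [], []
    for lvl in range(levels):
        X.append(rng.normal(loc=lvl + offset, scale=0.3, size=(n_per_level, 20)))
        y += [lvl] * n_per_level
    return np.vstack(X), np.array(y)

X_aud, y_aud = make_modality(50, 3, offset=0.0)  # "auditory" patterns
X_vis, y_vis = make_modality(50, 3, offset=0.1)  # "visual" patterns

# Cross-classification: fit class centroids on one modality, test on the other.
centroids = np.stack([X_aud[y_aud == lvl].mean(axis=0) for lvl in range(3)])
pred = np.argmin(((X_vis[:, None, :] - centroids) ** 2).sum(-1), axis=1)
acc = (pred == y_vis).mean()
print(acc)  # well above the 1/3 chance level for this toy data
```

Above-chance transfer accuracy is the operational test for a modality-general code: the decision boundaries learned in one modality must remain informative in the other.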


Author(s):  
Jasmin Léveillé ◽  
Isao Hayashi ◽  
Kunihiko Fukushima ◽  
...  

Recent advances in machine learning and computer vision have led to the development of several sophisticated learning schemes for object recognition by convolutional networks. One relatively simple learning rule, the Winner-Kill-Loser (WKL), was shown to be efficient at learning higher-order features in the neocognitron model when used in a written digit classification task. The WKL rule is one variant of incremental clustering procedures that adapt the number of cluster components to the input data. The WKL rule seeks to provide a complete, yet minimally redundant, covering of the input distribution. It is difficult to apply this approach directly to high-dimensional spaces since it leads to a dramatic explosion in the number of clustering components. In this work, a small generalization of the WKL rule is proposed to learn from high-dimensional data. We first show that the learning rule leads mostly to V1-like oriented cells when applied to natural images, suggesting that it captures second-order image statistics not unlike variants of Hebbian learning. We further embed the proposed learning rule into a convolutional network, specifically, the Neocognitron, and show its usefulness on a standard written digit recognition benchmark. Although the new learning rule leads to a small reduction in overall accuracy, this small reduction is accompanied by a major reduction in the number of coding nodes in the network. This in turn confirms that by learning statistical regularities rather than covering an entire input space, it may be possible to incrementally learn and retain most of the useful structure in the input distribution.
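One plausible reading of the WKL idea in plain code: the winning cluster learns, a runner-up that is nearly as close is killed as redundant, and inputs far from every cluster spawn a new one. The thresholds and learning rate below are invented for illustration and are not the neocognitron's values:

```python
import numpy as np

def wkl_cluster(data, create_thresh=1.0, kill_thresh=0.5, lr=0.2):
    """Toy winner-kill-loser incremental clustering (illustrative reading):
    adapts the number of cluster components to the input data while pruning
    redundant components."""
    clusters = []
    for x in data:
        if not clusters:
            clusters.append(np.array(x, dtype=float))
            continue
        d = [np.linalg.norm(x - c) for c in clusters]
        order = np.argsort(d)
        win = int(order[0])
        if d[win] > create_thresh:                  # nothing covers x: create
            clusters.append(np.array(x, dtype=float))
            continue
        clusters[win] += lr * (x - clusters[win])   # winner learns
        if len(order) > 1 and d[int(order[1])] < kill_thresh:
            clusters.pop(int(order[1]))             # redundant loser is killed
    return clusters

# Two well-separated blobs end up covered by exactly two clusters.
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal([0, 0], 0.1, (50, 2)),
                 rng.normal([5, 5], 0.1, (50, 2))])
print(len(wkl_cluster(pts)))  # 2
```

The kill step is what distinguishes this from plain online k-means-style updates: it is the mechanism that keeps the covering minimally redundant rather than merely complete.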


Author(s):  
MOHSEN EBRAHIMI MOGHADDAM

Motion blur is one of the most common causes of image degradation. Several methods have been proposed that precisely identify linear motion blur parameters, but most of them lose precision in the presence of noise. This paper introduces an algorithm for estimating linear motion blur parameters in noisy images. Motion direction is estimated using the Radon transform; motion length is then estimated by one of two methods: the first, based on the one-dimensional power spectrum, handles noise-free images, while the second uses bispectrum modeling for noisy images. A feed-forward back-propagation neural network, designed on the basis of the Weierstrass approximation theorem, models the bispectrum, with the delta rule as the network learning rule. The methods were tested on several standard images, such as Cameraman, Lena, and Lake, degraded by linear motion blur and additive noise. The experimental results were satisfactory: compared to related methods, the proposed approach improves both the lowest supported SNR and the precision of estimation.
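The power-spectrum route to motion-length estimation can be illustrated in one dimension: a length-L box blur zeroes the spectrum at regular intervals, and the spacing of those nulls reveals L. This is a minimal noise-free sketch in NumPy, not the paper's Radon-transform or bispectrum pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 256, 8

# Simulate horizontal motion blur in 1-D: circular convolution of a random
# scene line with a length-L box point-spread function (noise-free case).
signal = rng.normal(size=N)
psf = np.zeros(N)
psf[:L] = 1.0 / L
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(psf)))

# A length-L box blur zeroes the spectrum at multiples of N/L, so the first
# spectral null gives the motion length.
power = np.abs(np.fft.fft(blurred))[1 : N // 2]
first_null = 1 + int(np.argmax(power < 1e-8))
est_L = round(N / first_null)
print(est_L)  # 8
```

In noisy images these spectral nulls are partially filled in by the noise floor, which is exactly why the paper switches to bispectrum modeling (the bispectrum suppresses additive Gaussian noise) for the low-SNR regime.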


2019 ◽  
Vol 12 (4) ◽  
pp. 1299-1317 ◽  
Author(s):  
Jan Philipp Dietrich ◽  
Benjamin Leon Bodirsky ◽  
Florian Humpenöder ◽  
Isabelle Weindl ◽  
Miodrag Stevanović ◽  
...  

Abstract. The open-source modeling framework MAgPIE (Model of Agricultural Production and its Impact on the Environment) combines economic and biophysical approaches to simulate spatially explicit global scenarios of land use within the 21st century and the respective interactions with the environment. Among various other projects, it was used to simulate marker scenarios of the Shared Socioeconomic Pathways (SSPs) and contributed substantially to multiple IPCC assessments. However, with growing scope and detail, the non-linear model has become increasingly complex, computationally intensive and non-transparent, requiring structured approaches to improve the development and evaluation of the model. Here, we provide an overview of version 4 of MAgPIE and how it addresses these issues of increasing complexity using new technical features: modular structure with exchangeable module implementations, flexible spatial resolution, in-code documentation, automated code checking, model/output evaluation and open accessibility. Application examples provide insights into model evaluation, modular flexibility and region-specific analysis approaches. While this paper is focused on the general framework as such, the publication is accompanied by a detailed model documentation describing contents and equations, and by model evaluation documents giving insights into model performance for a broad range of variables. With the open-source release of the MAgPIE 4 framework, we hope to contribute to more transparent, reproducible and collaborative research in the field. Due to its modularity and spatial flexibility, it should provide a basis for a broad range of land-related research with economic or biophysical, global or regional focus.
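The "modular structure with exchangeable module implementations" is a language-agnostic pattern that can be sketched as a realization registry. MAgPIE itself is GAMS/R-based, and the module and realization names below are hypothetical, chosen only to illustrate the idea:

```python
# Registry mapping (module, realization) pairs to implementations, so one
# realization can be swapped for another without touching the rest of the model.
REALIZATIONS = {}

def register(module, name):
    """Decorator registering a class as one realization of a module."""
    def wrap(cls):
        REALIZATIONS[(module, name)] = cls
        return cls
    return wrap

@register("yields", "static")
class StaticYields:
    def run(self):
        return "fixed yield assumptions"

@register("yields", "dynamic")
class DynamicYields:
    def run(self):
        return "biophysically driven yields"

def configure(module, realization):
    """Select which realization of a module a given model run uses."""
    return REALIZATIONS[(module, realization)]()

print(configure("yields", "dynamic").run())  # biophysically driven yields
```

The payoff of such a structure is testability: each realization can be validated in isolation, and a model configuration is just a mapping from modules to chosen realizations.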


1995 ◽  
Vol 7 (4) ◽  
pp. 845-865 ◽  
Author(s):  
Jörg Bruske ◽  
Gerald Sommer

Dynamic cell structures (DCS) represent a family of artificial neural architectures suited both for unsupervised and supervised learning. They belong to the recently (Martinetz 1994) introduced class of topology representing networks (TRN) that build perfectly topology preserving feature maps. DCS employ a modified Kohonen learning rule in conjunction with competitive Hebbian learning. The Kohonen-type learning rule serves to adjust the synaptic weight vectors, while Hebbian learning establishes a dynamic lateral connection structure between the units reflecting the topology of the feature manifold. In the case of supervised learning, i.e., function approximation, each neural unit implements a radial basis function, and an additional layer of linear output units is adjusted according to a delta rule. DCS is the first RBF-based approximation scheme attempting to concurrently learn and utilize a perfectly topology preserving map for improved performance. Simulations on a selection of CMU benchmarks indicate that the DCS idea applied to the growing cell structure algorithm (Fritzke 1993c) leads to an efficient and elegant algorithm that can beat conventional models on similar tasks.
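The combination of a Kohonen-type weight update with competitive Hebbian lateral connections can be sketched as a single adaptation step. This is a deliberately simplified version (no edge aging, no unit insertion or removal), with illustrative learning rates:

```python
import numpy as np

def dcs_step(units, edges, x, eps_winner=0.1, eps_neighbor=0.01):
    """One simplified DCS-style adaptation step: link the two units nearest
    to input x (competitive Hebbian learning), move the winner toward x
    (Kohonen-type rule), and nudge the winner's topological neighbors."""
    d = np.linalg.norm(units - x, axis=1)
    win, second = (int(i) for i in np.argsort(d)[:2])
    edges.add(frozenset((win, second)))          # lateral connection
    units[win] += eps_winner * (x - units[win])  # winner update
    for e in edges:                              # neighbors via lateral edges
        if win in e:
            (nb,) = e - {win}
            units[nb] += eps_neighbor * (x - units[nb])
    return units, edges

units = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
edges = set()
dcs_step(units, edges, np.array([0.2, 0.0]))
print(sorted(tuple(sorted(e)) for e in edges))  # [(0, 1)]
```

Because edges are created only between the two units closest to each input, the lateral graph converges toward the induced Delaunay structure of the data manifold, which is what "perfectly topology preserving" refers to.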


2021 ◽  
Author(s):  
Fraser Aitken ◽  
Peter Kok

We constantly exploit the statistical regularities in our environment to help guide our perception. The hippocampus has been suggested to play a pivotal role both in learning environmental statistics and in exploiting them to generate perceptual predictions. However, it is unclear how the hippocampus balances encoding new predictive associations with the retrieval of existing ones. Here, we present the results of two high-resolution human fMRI studies (N=24 for both experiments) directly investigating this. Participants were exposed to auditory cues that predicted the identity of an upcoming visual shape (with 75% validity). Using multivoxel decoding analysis, we found that the hippocampus initially preferentially represented unexpected shapes (i.e., those that violated the cue regularities), but later switched to representing the cue-predicted shape regardless of which shape was actually presented. These findings demonstrate that the hippocampus is involved in both acquiring and exploiting predictive associations, and switches between these modes depending on whether learning is ongoing or complete.


2008 ◽  
Vol 10 (4) ◽  
pp. 331-343 ◽  
Author(s):  
M. Shourian ◽  
S. Jamshid Mousavi ◽  
M. B. Menhaj ◽  
E. Jabbari

Heuristic search techniques are highly flexible, though they represent computationally intensive optimization methods that may require thousands of evaluations of expensive objective functions. This paper integrates MODSIM, a generalized river basin network flow model, a particle swarm optimization (PSO) algorithm and artificial neural networks into a modeling framework for optimum water allocations at basin scale. MODSIM is called in the PSO model to simulate a river basin system operation and to evaluate the fitness of each set of selected design and operational variables with respect to the model's objective function, which is the minimization of the system's design and operational cost. Since the direct incorporation of MODSIM into a PSO algorithm is computationally prohibitive, an ANN model as a meta-model is trained to approximate the MODSIM modeling tool. The resulting model is used in the problem of optimal design and operation of the upstream Sirvan river basin in Iran as a case study. The computational efficiency of the model makes it possible to analyze the model performance through changing its parameters so that better solutions are obtained compared to those of the original PSO–MODSIM model.
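The surrogate idea (train a cheap meta-model on a handful of expensive simulator runs, then let PSO search over the surrogate) can be sketched in a few lines. Here a simple quadratic "expensive" cost stands in for a MODSIM run, and a polynomial fit stands in for the trained ANN; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_cost(x):
    """Stand-in for a costly simulation call (e.g. a full river basin run)."""
    return (x - 2.0) ** 2 + 1.0

# Meta-model: fit a cheap surrogate to a few expensive evaluations
# (the paper trains an ANN; a polynomial fit keeps the sketch small).
xs = np.linspace(-5, 5, 11)
coeffs = np.polyfit(xs, expensive_cost(xs), deg=2)
surrogate = lambda x: np.polyval(coeffs, x)

# Minimal PSO, evaluating the surrogate instead of the expensive model.
pos = rng.uniform(-5, 5, 20)
vel = np.zeros(20)
pbest, pcost = pos.copy(), surrogate(pos)
gbest = pbest[np.argmin(pcost)]
for _ in range(100):
    r1, r2 = rng.random(20), rng.random(20)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    cost = surrogate(pos)
    better = cost < pcost
    pbest[better], pcost[better] = pos[better], cost[better]
    gbest = pbest[np.argmin(pcost)]
print(round(float(gbest), 2))  # close to the true optimum at x = 2
```

The trade-off the paper exploits is that thousands of surrogate evaluations cost less than a single simulator run, at the price of surrogate approximation error near the optimum.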


1996 ◽  
Vol 41 (6) ◽  
pp. 558-559
Author(s):  
Timothy Anderson
