Probabilistic associative learning suffices for learning the temporal structure of multiple sequences

PLoS ONE ◽  
2019 ◽  
Vol 14 (8) ◽  
pp. e0220161 ◽  
Author(s):  
Ramon H. Martinez ◽  
Anders Lansner ◽  
Pawel Herman

Abstract Many brain phenomena at both the cognitive and the behavioral level exhibit remarkable sequential characteristics. While the mechanisms behind the sequential nature of the underlying brain activity are likely multifarious and multi-scale, in this work we attempt to characterize to what degree some of these properties can be explained as a consequence of simple associative learning. To this end, we employ a parsimonious firing-rate attractor network equipped with the Hebbian-like Bayesian Confidence Propagation Neural Network (BCPNN) learning rule, which relies on synaptic traces with asymmetric temporal characteristics. The proposed network model is able to encode and reproduce temporal aspects of the input, and offers internal control of the recall dynamics by gain modulation. We provide an analytical characterization of the relationship between the structure of the weight matrix, the dynamical network parameters, and the temporal aspects of sequence recall. We also present a computational study of the performance of the system under the effects of noise for an extensive region of the parameter space. Finally, we show how the inclusion of modularity in our network structure facilitates the learning and recall of multiple overlapping sequences even in a noisy regime.
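
A minimal sketch of the kind of trace-based, probabilistic Hebbian update described above (the trace filtering, time constants, and variable names are illustrative assumptions, not the paper's exact BCPNN formulation):

    import numpy as np

    def bcpnn_like_step(z_pre, z_post, p_i, p_j, p_ij, dt=0.01, tau_p=1.0, eps=1e-6):
        """One Euler step of a BCPNN-like probabilistic associative update.

        z_pre, z_post : pre-/post-synaptic activity traces (1-D arrays); filtering the
                        two traces with different time constants would give the
                        asymmetric temporal profile needed for sequence learning.
        p_i, p_j, p_ij: running estimates of unit and pairwise activation probabilities.
        """
        # Low-pass filter the activity traces into probability estimates
        p_i  = p_i  + dt / tau_p * (z_pre - p_i)
        p_j  = p_j  + dt / tau_p * (z_post - p_j)
        p_ij = p_ij + dt / tau_p * (np.outer(z_post, z_pre) - p_ij)

        # Hebbian-like weights: log ratio of joint to marginal probabilities;
        # the bias reflects each unit's own activation probability.
        w = np.log((p_ij + eps) / (np.outer(p_j, p_i) + eps))
        b = np.log(p_j + eps)
        return p_i, p_j, p_ij, w, b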


2017 ◽  
Author(s):  
André Luzardo ◽  
Eduardo Alonso ◽  
Esther Mondragón

Abstract Computational models of classical conditioning have made significant contributions to the theoretical understanding of associative learning, yet they still struggle when the temporal aspects of conditioning are taken into account. Interval timing models have contributed a rich variety of time representations and provided accurate predictions for the timing of responses, but they usually have little to say about associative learning. In this article we present a unified model of conditioning and timing that is based on the influential Rescorla-Wagner conditioning model and the more recently developed Timing Drift-Diffusion model. We test the model by simulating 10 experimental phenomena and show that it provides an adequate account of 8 of them and a partial account of the other 2. We argue that the model can account for more phenomena in the chosen set than other models of similar scope: CSC-TD, MS-TD, Learning to Time, and Modular Theory. A comparison and analysis of the mechanisms in these models is provided, with a focus on the types of time representation and associative learning rule used.

Author Summary
How does the timing of events affect the way we learn about associations between those events? Computational models have made great contributions to our understanding of associative learning, but they usually do not perform very well when time is taken into account. Models of timing have reached high levels of accuracy in describing timed behaviour, but they usually do not have much to say about associations. A unified approach would combine associative learning and timing models into a single framework. This article takes just this approach: it combines the influential Rescorla-Wagner associative model with a timing model based on the drift-diffusion process, and shows how the resulting model can account for a number of learning and timing phenomena. The article also compares the new model to others that are similar in scope.
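
A minimal sketch of the two building blocks named above, a Rescorla-Wagner association update and a drift-diffusion timer (the parameter names and values, and the absence of any coupling between the two pieces, are simplifying assumptions for illustration):

    import numpy as np

    def rescorla_wagner_step(V, present, reinforcement, alpha=0.1, beta=1.0):
        """Update associative strengths V for the CSs present on a trial:
        delta V = alpha * beta * (lambda - sum of V over present CSs)."""
        prediction_error = reinforcement - np.sum(V[present])
        V[present] += alpha * beta * prediction_error
        return V, prediction_error

    def drift_diffusion_interval(drift, threshold=1.0, noise=0.05, dt=0.01, rng=None):
        """Time for a noisy accumulator with positive drift to reach the threshold,
        a simple stand-in for a drift-diffusion representation of elapsed time."""
        rng = np.random.default_rng() if rng is None else rng
        x, t = 0.0, 0.0
        while x < threshold:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        return t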


2021 ◽  
Vol 29 (1) ◽  
pp. 73-87 ◽  
Author(s):  
Margaretha Gansterer ◽  
Richard F. Hartl

Abstract Logistics providers have to utilize available capacities efficiently in order to cope with increasing competition and the desired quality of service. One possibility to reduce idle capacity is to build coalitions with other players on the market. While the willingness to enter such coalitions does exist in the logistics industry, the success of collaborations strongly depends on the mutual trust and behavior of participants. Hence, a proper mechanism design, in which carriers have no incentive to deviate from jointly established rules, is needed. We propose to use a combinatorial auction system, for which several properties are already well researched, but little is known about the auction's first phase, in which carriers have to decide on the set of requests offered to the auction. Profitable selection strategies, aiming at maximization of total collaboration gains, do exist. However, the impact on individual outcomes if one or more players deviate from jointly agreed selection rules has yet to be investigated. We analyze whether participants in an auction-based transport collaboration face a Prisoners' Dilemma. While it is possible to construct such a setting, our computational study reveals that carriers do not profit from abandoning the cooperative strategy. This is an important and insightful finding, since it further strengthens the practical applicability of auction-based trading mechanisms in collaborative transportation.
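
A minimal sketch of the Prisoners' Dilemma test implied by this analysis, phrased in terms of the standard payoffs for mutual cooperation (R), unilateral defection (T), being exploited (S), and mutual defection (P); the numeric payoffs are placeholders, not results from the study:

    def is_prisoners_dilemma(R, T, S, P):
        """Classic Prisoners' Dilemma ordering: T > R > P > S (with 2R > T + S)."""
        return T > R > P > S and 2 * R > T + S

    # Placeholder per-carrier profits in some collaboration-gain unit (not study data)
    print(is_prisoners_dilemma(R=10, T=12, S=3, P=5))  # True: defecting tempts each carrier
    print(is_prisoners_dilemma(R=10, T=9, S=6, P=5))   # False: cooperation is not dominated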


2018 ◽  
Author(s):  
Amrit Kashyap ◽  
Shella Keilholz

Abstract Brain Network Models (BNMs) have become a promising theoretical framework for simulating signals that are representative of whole-brain activity, such as resting-state fMRI. However, it has been difficult to compare the complex brain activity between simulated and empirical data. Previous studies have used simple metrics that summarize coordination between regions, such as functional connectivity; we extend this by using a variety of dynamical analysis tools that are currently used to understand resting-state fMRI. We show that certain properties correspond to the structural connectivity input that is shared between the models, while certain dynamic properties relate more to the mathematical description of the Brain Network Model. We conclude that the dynamic properties that gauge temporal structure, rather than spatial coordination, in the rs-fMRI signal provide the largest contrasts between different BNMs and the unknown empirical dynamical system. Our results will be useful in constraining and developing more realistic simulations of whole-brain activity.
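
A minimal sketch of the most common of those comparison metrics, functional connectivity computed as pairwise Pearson correlations and compared between simulated and empirical signals (array shapes and variable names are assumptions):

    import numpy as np

    def functional_connectivity(ts):
        """ts: array of shape (regions, timepoints); returns a regions x regions FC matrix."""
        return np.corrcoef(ts)

    def fc_similarity(sim_ts, emp_ts):
        """Correlate the upper triangles of the simulated and empirical FC matrices."""
        fc_sim = functional_connectivity(sim_ts)
        fc_emp = functional_connectivity(emp_ts)
        iu = np.triu_indices_from(fc_sim, k=1)
        return np.corrcoef(fc_sim[iu], fc_emp[iu])[0, 1]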


2019 ◽  
Author(s):  
Jennifer Stiso ◽  
Marie-Constance Corsi ◽  
Javier Omar Garcia ◽  
Jean M Vettel ◽  
Fabrizio De Vico Fallani ◽  
...  

Motor imagery-based brain-computer interfaces (BCIs) use an individual's ability to volitionally modulate localized brain activity, often as a therapy for motor dysfunction or to probe causal relations between brain activity and behavior. However, many individuals cannot learn to successfully modulate their brain activity, greatly limiting the efficacy of BCI for therapy and for basic scientific inquiry. Formal experiments designed to probe the nature of BCI learning have offered initial evidence that coherent activity across diverse cognitive systems is a hallmark of individuals who can successfully learn to control the BCI. However, little is known about how these distributed networks interact through time to support learning. Here, we address this gap in knowledge by constructing and applying a multimodal network approach to decipher brain-behavior relations in motor imagery-based brain-computer interface learning using magnetoencephalography. Specifically, we employ a minimally constrained matrix decomposition method, non-negative matrix factorization, to simultaneously identify regularized, covarying subgraphs of functional connectivity and behavior, and to detect the time-varying expression of each subgraph. We find that learning is marked by distributed brain-behavior relations: swifter learners displayed many subgraphs whose temporal expression tracked performance. Learners also displayed marked variation in the spatial properties of subgraphs, such as the connectivity between the frontal lobe and the rest of the brain, and in the temporal properties of subgraphs, such as the stage of learning at which they reached maximum expression. From these observations, we posit a conceptual model in which certain subgraphs support learning by modulating brain activity in networks important for sustaining attention. After formalizing the model in the framework of network control theory, we test it and find that good learners display a single subgraph whose temporal expression tracked performance and whose architecture supports easy modulation of brain regions important for attention. The nature of our contribution to the neuroscience of BCI learning is therefore both computational and theoretical; we first use a minimally constrained, individual-specific method of identifying mesoscale structure in dynamic brain activity to show how global connectivity and interactions between distributed networks support BCI learning, and then we use a formal network model of control to lend theoretical support to the hypothesis that these identified subgraphs are well suited to modulate attention.
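
A minimal sketch of the matrix decomposition step described above, using scikit-learn's non-negative matrix factorization on a windows-by-features matrix in which vectorized connectivity edges are concatenated with a behavioral performance column (the data layout, dimensions, and number of subgraphs are assumptions for illustration):

    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    n_windows, n_edges = 120, 4950                     # e.g. 100 regions -> 4950 unique edges
    connectivity = rng.random((n_windows, n_edges))    # stand-in for edge time series
    performance = rng.random((n_windows, 1))           # stand-in for behavioral accuracy
    data = np.hstack([connectivity, performance])      # NMF requires non-negative input

    model = NMF(n_components=8, init="nndsvda", max_iter=500, random_state=0)
    expression = model.fit_transform(data)    # time-varying expression of each subgraph
    subgraphs = model.components_             # covarying edge-plus-behavior subgraphs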


2021 ◽  
pp. 1-55
Author(s):  
Amit Naskar ◽  
Anirudh Vattikonda ◽  
Gustavo Deco ◽  
Dipanjan Roy ◽  
Arpan Banerjee

Abstract Previous computational models have related spontaneous resting-state brain activity to local excitatory-inhibitory (E-I) balance in neuronal populations. However, how the underlying neurotransmitter kinetics associated with E-I balance govern spontaneous resting-state brain dynamics remains unknown. Understanding the mechanisms by which fluctuations in neurotransmitter concentrations, a hallmark of a variety of clinical conditions, relate to functional brain activity is of critical importance. We propose a multi-scale dynamic mean field model (MDMF), a system of coupled differential equations that captures the synaptic gating dynamics in excitatory and inhibitory neural populations as a function of neurotransmitter kinetics. Individual brain regions are modelled as MDMF populations and are connected by realistic connection topologies estimated from diffusion tensor imaging data. First, the MDMF successfully predicts resting-state functional connectivity. Second, our results show that an optimal range of glutamate and GABA neurotransmitter concentrations serves as the dynamic working point of the brain, that is, the state of heightened metastability observed in empirical blood-oxygen-level-dependent signals. Third, as a test of predictive validity, the network measures of segregation (modularity and clustering coefficient) and integration (global efficiency and characteristic path length) reported in existing healthy and pathological brain network studies could be captured by functional connectivity simulated from the MDMF model.
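
A minimal sketch of integrating coupled synaptic-gating equations of the general kind the MDMF builds on; the equations below follow a reduced Wong-Wang-style mean field with placeholder parameters, whereas the actual MDMF additionally makes the gating depend on glutamate and GABA kinetics:

    import numpy as np

    def simulate_gating(C, steps=20000, dt=1e-3, tau_s=0.1, gamma=0.641,
                        w=0.9, G=0.5, J=0.2609, I0=0.3):
        """Euler-integrate reduced Wong-Wang-like gating variables on connectome C.

        C : (N, N) structural connectivity matrix (e.g. estimated from diffusion imaging).
        Returns the synaptic gating time series S with shape (steps, N).
        """
        N = C.shape[0]
        S = 0.1 * np.ones(N)
        out = np.empty((steps, N))
        a, b, d = 270.0, 108.0, 0.154                        # transfer-function constants
        for t in range(steps):
            x = w * J * S + G * J * C.dot(S) + I0            # recurrent + network + background input
            H = (a * x - b) / (1.0 - np.exp(-d * (a * x - b)))   # population firing rate
            S = np.clip(S + dt * (-S / tau_s + gamma * (1.0 - S) * H), 0.0, 1.0)
            out[t] = S
        return out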


Author(s):  
Juergen Dukart ◽  
Ross D. Markello ◽  
Adrian Raine ◽  
Simon B. Eickhoff ◽  
Timm B. Poeppl

2019 ◽  
Vol 30 (3) ◽  
pp. 1708-1715
Author(s):  
Andrés Canales-Johnson ◽  
Emiliano Merlo ◽  
Tristan A Bekinschtein ◽  
Anat Arzi

Abstract Recent evidence indicates that humans can learn entirely new information during sleep. To elucidate the neural dynamics underlying sleep learning, we investigated brain activity during auditory–olfactory discriminatory associative learning in human sleep. We found that learning-related delta and sigma neural changes are involved in early acquisition stages, when new associations are being formed. In contrast, learning-related theta activity emerged in later stages of the learning process, after tone–odor associations were already established. These findings suggest that learning new associations during sleep is signaled by a dynamic interplay between slow-wave, sigma, and theta activity.
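
A minimal sketch of the band-limited power estimate underlying such delta/sigma/theta analyses, using Welch's method (the band boundaries, sampling rate, and window length are typical assumptions, not values from the study):

    import numpy as np
    from scipy.signal import welch

    BANDS = {"delta": (0.5, 4.0), "theta": (4.0, 8.0), "sigma": (12.0, 16.0)}

    def band_power(eeg, fs=250.0):
        """eeg: 1-D EEG signal; returns mean spectral power per frequency band."""
        freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))
        return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
                for name, (lo, hi) in BANDS.items()}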

