A Double Error Dynamic Asymptote Model of Associative Learning

2017 ◽  
Author(s):  
Niklas H. Kokkola ◽  
Esther Mondragón ◽  
Eduardo Alonso

ABSTRACT
In this paper a formal model of associative learning is presented which incorporates representational and computational mechanisms that, as a coherent corpus, empower it to make accurate predictions of a wide variety of phenomena that so far have eluded a unified account in learning theory. In particular, the Double Error Dynamic Asymptote (DDA) model introduces: 1) a fully-connected network architecture in which stimuli are represented as temporally clustered elements that associate to each other, so that elements of one cluster engender activity on other clusters, which naturally implements neutral stimuli associations and mediated learning; 2) a predictor error term within the traditional error correction rule (the double error), which reduces the rate of learning for expected predictors; 3) a revaluation associability rate that operates on the assumption that the outcome predictiveness is tracked over time so that prolonged uncertainty is learned, reducing the levels of attention to initially surprising outcomes; and critically 4) a biologically plausible variable asymptote, which encapsulates the principle of Hebbian learning, leading to stronger associations for similar levels of cluster activity. The outputs of a set of simulations of the DDA model are presented along with empirical results from the literature. Finally, the predictive scope of the model is discussed.
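The double-error idea in point 2 can be sketched as a small modification of the familiar error-correction rule: the usual outcome error is additionally scaled, per predictor, by how unexpected that predictor itself is. A minimal illustrative sketch, in which the symbols and the exact form of the predictor-error term are assumptions rather than the paper's equations:

```python
import numpy as np

def dda_update(V, alpha, present, lam, beta=0.1):
    """One trial of a double-error update (illustrative, not the paper's
    exact rule): the outcome error (lam minus the summed prediction) is
    further scaled, per predictor, by a predictor error term, so that
    well-predicted predictors learn more slowly."""
    outcome_err = lam - V[present].sum()          # classic error-correction term
    for i in present:
        pred_err = 1.0 - V[i] / max(lam, 1e-9)    # second error: how unexpected is predictor i
        V[i] += alpha[i] * beta * pred_err * outcome_err
    return V
```

Repeated presentations drive the summed associative strength toward the asymptote, with already-expected predictors slowing down first.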

1995 ◽  
Vol 7 (6) ◽  
pp. 1191-1205 ◽  
Author(s):  
Colin Fyfe

A review is given of a new artificial neural network architecture in which the weights converge to the principal component subspace. The weights learn through simple Hebbian learning alone, yet require no clipping, normalization, or weight decay. The net self-organizes using negative feedback of activation from a set of "interneurons" to the input neurons. By allowing this negative feedback from the interneurons to act on other interneurons, we can introduce the asymmetry necessary to cause convergence to the actual principal components. Simulations and analysis confirm such convergence.
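The architecture described is compact enough to sketch end to end: an "interneuron" computes y = Wx, feeds -W^T y back onto the inputs, and learns with a plain Hebbian rule on the residual, with no clipping or normalization. A toy reconstruction under assumed dimensions and learning rate, not Fyfe's code:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data whose variance is concentrated along the first coordinate.
X = rng.normal(size=(2000, 3)) * np.array([2.0, 0.6, 0.2])

W = rng.normal(scale=0.1, size=(1, 3))   # one interneuron
eta = 0.01
for x in X:
    y = W @ x                  # interneuron activation
    e = x - W.T @ y            # negative feedback leaves a residual on the inputs
    W += eta * np.outer(y, e)  # simple Hebbian learning on the residual

w = W[0] / np.linalg.norm(W[0])  # should align with the dominant direction
```

Expanding the update gives dW = eta * (y x^T - y^2 W): the feedback term supplies the stabilization that would otherwise require weight decay or explicit normalization.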


Author(s):  
Han Jia ◽  
Xuecheng Zou

A major problem of counting high-density crowded scenes is the lack of flexibility and robustness exhibited by existing methods, and almost all recent state-of-the-art methods only show good performance in estimation errors and density map quality for select datasets. The biggest challenge faced by these methods is the analysis of similar features between the crowd and background, as well as overlaps between individuals. Hence, we propose a light and easy-to-train network for congestion cognition based on dilated convolution, which can exponentially enlarge the receptive field, preserve original resolution, and generate a high-quality density map. With the dilated convolutional layers, the counting accuracy can be enhanced as the feature map keeps its original resolution. By removing fully-connected layers, the network architecture becomes more concise, thereby reducing resource consumption significantly. The flexibility and robustness improvements of the proposed network compared to previous methods were validated using the variance of data size and different overlap levels of existing open source datasets. Experimental results showed that the proposed network is suitable for transfer learning on different datasets and enhances crowd counting in highly congested scenes. Therefore, the network is expected to have broader applications, for example in Internet of Things and portable devices.
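The claim that dilation can "exponentially enlarge the receptive field" while preserving resolution is easy to verify with the standard receptive-field arithmetic for stride-1 stacks; the kernel size and dilation schedule below are illustrative, not taken from the paper:

```python
def receptive_field(kernel=3, dilations=(1, 2, 4, 8)):
    """Receptive field of stacked stride-1 dilated convolutions: each layer
    adds (kernel - 1) * dilation, so doubling the dilation per layer grows
    the field exponentially while the feature map keeps its resolution."""
    rf = 1
    for d in dilations:
        rf += (kernel - 1) * d
    return rf
```

Four 3x3 layers with dilations 1, 2, 4, 8 span 31 input pixels; the same four layers without dilation span only 9.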


2018 ◽  
Vol 27 (2) ◽  
pp. 187-192 ◽  
Author(s):  
Alexander M. Shephard ◽  
Vadim Aksenov ◽  
C. David Rollo

Many terrestrial and aquatic animals learn associations between environmental features and chemical cues of mortality risk (e.g. conspecific alarm pheromones or predator-derived cues), but the chemical nature of the cues that mediate this type of learning is rarely considered. Fatty acid necromones (particularly oleic and linoleic acids) are well established as cues associated with dead or injured conspecifics. Necromones elicit risk-aversive behavior across diverse arthropod phylogenies, yet they have not been linked to associative learning. Here, we provide evidence that necromones can mediate associative olfactory learning in an insect by acting as an aversive reinforcement. When house crickets (Acheta domesticus) were forced to inhabit an environment containing an initially attractive odor along with a necromone cue, they subsequently avoided the previously attractive odor and displayed tolerance for an initially unattractive odor. This occurred when crickets were conditioned with linoleic acid but not when they were conditioned with oleic acid. Similar aversive learning occurred when crickets were conditioned with ethanol body extracts composed of male and female corpses combined, as well as extracts composed of female corpses alone. Conditioning with male body extract did not elicit learned aversion in either sex, even though we detected no notable differences in fatty acid composition between male and female body extracts. We suggest that necromone-mediated learning responses might vary depending on synergistic or antagonistic interactions with sex- or species-specific recognition cues.


2021 ◽  
Vol 7 ◽  
pp. e638
Author(s):  
Md Nahidul Islam ◽  
Norizam Sulaiman ◽  
Fahmid Al Farid ◽  
Jia Uddin ◽  
Salem A. Alyami ◽  
...  

Hearing deficiency is the world’s most common sensory impairment and impedes human communication and learning. Early and precise hearing diagnosis using electroencephalogram (EEG) is referred to as the optimum strategy to deal with this issue. Among a wide range of EEG control signals, the most relevant modality for hearing loss diagnosis is the auditory evoked potential (AEP), which is produced in the brain’s cortex area through an auditory stimulus. This study aims to develop a robust intelligent auditory sensation system utilizing a pre-trained deep learning framework by analyzing and evaluating the functional reliability of hearing based on the AEP response. First, the raw AEP data is transformed into time-frequency images through the wavelet transformation. Then, lower-level functionality is eliminated using a pre-trained network. Here, an improved-VGG16 architecture has been designed by removing some convolutional layers and adding new layers in the fully connected block. Subsequently, the higher levels of the neural network architecture are fine-tuned using the labelled time-frequency images. Finally, the proposed method’s performance has been validated on a reputable, publicly available AEP dataset, recorded from sixteen subjects while they heard specific auditory stimuli in the left or right ear. The proposed method outperforms the state-of-the-art studies by improving the classification accuracy to 96.87% (from 57.375%), which indicates that the proposed improved-VGG16 architecture can significantly deal with the AEP response in early hearing loss diagnosis.


2000 ◽  
Vol 23 (4) ◽  
pp. 550-551
Author(s):  
Mikhail N. Zhadin

The absence of a clear influence of an animal's behavioral responses on Hebbian associative learning in the cerebral cortex requires some changes to the Hebbian learning rules. The participation of the brain's monoaminergic systems in Hebbian associative learning is considered.


Author(s):  
Chenyang Li ◽  
Xin Zhang ◽  
Lufan Liao ◽  
Lianwen Jin ◽  
Weixin Yang

Skeleton-based gesture recognition is gaining popularity due to its wide range of possible applications. The key issues are how to extract discriminative features and how to design the classification model. In this paper, we first leverage a robust feature descriptor, the path signature (PS), and propose three PS features to explicitly represent the spatial and temporal motion characteristics, i.e., spatial PS (S-PS), temporal PS (T-PS) and temporal-spatial PS (T-S-PS). Considering the significance of fine hand movements in gestures, we propose an "attention on hand" (AOH) principle to define joint pairs for the S-PS and select single joints for the T-PS. In addition, the dyadic method is employed to extract the T-PS and T-S-PS features that encode global and local temporal dynamics in the motion. Secondly, without the recurrent strategy, the classification model still faces challenges from temporal variation among different sequences. We propose a new temporal transformer module (TTM) that can match the key frames of sequences by learning a temporal shifting parameter for each input. This is a learning-based module that can be incorporated into a standard neural network architecture. Finally, we design a multi-stream network with fully connected layers to treat the spatial and temporal features separately and fuse them for the final result. We have tested our method on three benchmark gesture datasets, i.e., ChaLearn 2016, ChaLearn 2013 and MSRC-12. Experimental results demonstrate that we achieve state-of-the-art performance on skeleton-based gesture recognition with high computational efficiency.
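The core operation of a temporal transformer module, warping the time axis by a learned shift so that key frames of different sequences line up, amounts to differentiable resampling. A minimal sketch with a single shift parameter; the real TTM is learned end to end, and the function name and linear-interpolation choice are assumptions:

```python
import numpy as np

def temporal_transform(seq, shift, scale=1.0):
    """Warp the time axis of a 1-D sequence by shift/scale and resample by
    linear interpolation; because the output is differentiable in shift and
    scale, the warp can be learned like any other network parameter."""
    n = len(seq)
    src = np.clip(np.arange(n) * scale + shift, 0, n - 1)  # source positions
    lo = np.floor(src).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    frac = src - lo
    return (1 - frac) * seq[lo] + frac * seq[hi]
```

A shift of 2.0 reads each output frame from two time steps later, clamping at the sequence boundary.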


2021 ◽  
Vol 15 ◽  
Author(s):  
Shirin Dora ◽  
Sander M. Bohte ◽  
Cyriel M. A. Pennartz

Predictive coding provides a computational paradigm for modeling perceptual processing as the construction of representations accounting for causes of sensory inputs. Here, we developed a scalable, deep network architecture for predictive coding that is trained using a gated Hebbian learning rule and mimics the feedforward and feedback connectivity of the cortex. After training on image datasets, the models formed latent representations in higher areas that allowed reconstruction of the original images. We analyzed low- and high-level properties such as orientation selectivity, object selectivity and sparseness of neuronal populations in the model. As reported experimentally, image selectivity increased systematically across ascending areas in the model hierarchy. Depending on the strength of regularization factors, sparseness also increased from lower to higher areas. The results suggest a rationale as to why experimental results on sparseness across the cortical hierarchy have been inconsistent. Finally, representations for different object classes became more distinguishable from lower to higher areas. Thus, deep neural networks trained using a gated Hebbian formulation of predictive coding can reproduce several properties associated with neuronal responses along the visual cortical hierarchy.
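The training scheme can be miniaturized to a single predictive-coding layer: feedback weights predict the input, and the leftover prediction error drives both inference of the latent representation and a Hebbian weight update. A linear toy under assumed sizes and learning rates, not the paper's deep network:

```python
import numpy as np

rng = np.random.default_rng(1)
Z = rng.normal(size=(200, 4))
B = rng.normal(size=(4, 8))
X = Z @ B                                  # inputs generated by 4 latent causes
W = rng.normal(scale=0.5, size=(8, 4))     # top-down (feedback) weights

def infer(x, W, steps=100):
    """Settle the latent representation by repeatedly feeding back the
    prediction and pushing the remaining error through the forward weights."""
    r = np.zeros(W.shape[1])
    lr = 1.0 / (np.trace(W.T @ W) + 1e-6)  # conservative step keeps the loop stable
    for _ in range(steps):
        e = x - W @ r                      # prediction error at the input layer
        r += lr * W.T @ e
    return r

def recon_error(W):
    return float(np.mean([np.linalg.norm(x - W @ infer(x, W)) for x in X[:40]]))

err_before = recon_error(W)
eta = 0.005
for _ in range(5):                         # a few passes over the toy dataset
    for x in X:
        r = infer(x, W)
        e = x - W @ r
        W += eta * np.outer(e, r)          # Hebbian: error times representation
err_after = recon_error(W)
```

After training, the latent representation reconstructs the input through the feedback weights, a one-layer analogue of the image reconstructions reported in the paper.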


Author(s):  
R. М. Peleshchak ◽  
V. V. Lytvyn ◽  
О. І. Cherniak ◽  
І. R. Peleshchak ◽  
М. V. Doroshenko

Context. To reduce the computational resource time in the problems of diagnosing and recognizing distorted images based on a fully connected stochastic pseudospin neural network, it becomes necessary to thin out synaptic connections between neurons, which is solved using the method of diagonalizing the matrix of synaptic connections without losing interaction between all neurons in the network. Objective. To create an architecture of a stochastic pseudospin neural network with diagonal synaptic connections, without losing the interaction between all the neurons in the layer, to reduce its learning time. Method. The paper uses the Householder method, the method of compressing input images based on the diagonalization of the matrix of synaptic connections, and the computer mathematics system MATLAB for converting a fully connected neural network into a tridiagonal form with hidden synaptic connections between all neurons. Results. We developed a model of a stochastic neural network architecture with sparse renormalized synaptic connections that takes into account deleted synaptic connections. Based on the transformation of the synaptic connection matrix of a fully connected neural network into a Hessenberg matrix with tridiagonal synaptic connections, we proposed a renormalized local Hebb rule. Using the computer mathematics system “Wolfram Mathematica 11.3”, we calculated, as a function of the number of neurons N, the relative tuning time of synaptic connections (per iteration) in a stochastic pseudospin neural network with a tridiagonal connection matrix, relative to the tuning time of synaptic connections (per iteration) in a fully connected synaptic neural network. Conclusions. We found that with an increase in the number of neurons, the tuning time of synaptic connections (per iteration) in a stochastic pseudospin neural network with a tridiagonal connection matrix, relative to the tuning time of synaptic connections (per iteration) in a fully connected synaptic neural network, decreases according to a hyperbolic law. Depending on the direction of the pseudospin neurons, we proposed a classification of the renormalized neural network into a ferromagnetic structure, an antiferromagnetic structure, and a dipole glass.
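The core compression step, reducing a symmetric fully connected weight matrix to tridiagonal form by Householder similarity transforms, can be reproduced in a few lines. Because each reflection is orthogonal, the spectrum is preserved exactly, which is what lets the thinned network keep the interactions of the fully connected one. This is the generic textbook reduction, not the authors' MATLAB code:

```python
import numpy as np

def householder_tridiagonal(A):
    """Reduce a symmetric matrix to tridiagonal form with Householder
    reflections applied as similarity transforms, so only diagonal and
    next-to-diagonal couplings remain while the eigenvalues are unchanged."""
    T = A.astype(float).copy()
    n = T.shape[0]
    for k in range(n - 2):
        x = T[k + 1:, k]
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])
        nv = np.linalg.norm(v)
        if nv < 1e-12:
            continue                               # column already zeroed
        v /= nv
        H = np.eye(n)
        H[k + 1:, k + 1:] -= 2.0 * np.outer(v, v)  # reflector on the trailing block
        T = H @ T @ H                              # orthogonal similarity transform
    return T
```

With only O(N) nonzero couplings left instead of O(N^2), the per-iteration tuning cost drops with network size, consistent with the hyperbolic decrease reported.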


Author(s):  
Péter Kovács ◽  
Gergő Bognár ◽  
Christian Huber ◽  
Mario Huemer

In this paper, we introduce VPNet, a novel model-driven neural network architecture based on variable projection (VP). Applying VP operators to neural networks results in learnable features, interpretable parameters, and compact network structures. This paper discusses the motivation and mathematical background of VPNet and presents experiments. The VPNet approach was evaluated in the context of signal processing, where we classified a synthetic dataset and real electrocardiogram (ECG) signals. Compared to fully connected and one-dimensional convolutional networks, VPNet offers fast learning ability and good accuracy at a low computational cost of both training and inference. Based on these advantages and the promising results obtained, we anticipate a profound impact on the broader field of signal processing, in particular on classification, regression and clustering problems.
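Variable projection itself is simple to state: split the parameters into nonlinear ones that shape a basis and linear coefficients that are solved in closed form by least squares, so the optimizer (or network) only has to learn the nonlinear part. A sketch with an assumed two-exponential basis; VPNet's actual basis functions and layer structure differ:

```python
import numpy as np

def vp_residual(theta, t, y):
    """For nonlinear parameters theta, build the basis Phi(theta), solve the
    linear coefficients by least squares, and return the projected residual
    together with the coefficients."""
    Phi = np.column_stack([np.exp(-theta[0] * t), np.exp(-theta[1] * t)])
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return y - Phi @ c, c
```

Only theta is exposed to the learner, which is what keeps the parameter count small and the parameters interpretable.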


2019 ◽  
Author(s):  
Melissa J. Sharpe ◽  
Hannah M. Batchelor ◽  
Lauren E. Mueller ◽  
Chun Yun Chang ◽  
Etienne J.P. Maes ◽  
...  

Abstract
Dopamine neurons fire transiently in response to unexpected rewards. These neural correlates are proposed to signal the reward prediction error described in model-free reinforcement learning algorithms. This error term represents the unpredicted or ‘excess’ value of the rewarding event. In model-free reinforcement learning, this value is then stored as part of the learned value of any antecedent cues, contexts or events, making them intrinsically valuable, independent of the specific rewarding event that caused the prediction error. In support of equivalence between dopamine transients and this model-free error term, proponents cite causal optogenetic studies showing that artificially induced dopamine transients cause lasting changes in behavior. Yet none of these studies directly demonstrate the presence of cached value under conditions appropriate for associative learning. To address this gap in our knowledge, we conducted three studies in which we optogenetically activated dopamine neurons while rats were learning associative relationships, both with and without reward. In each experiment, the antecedent cues failed to acquire value and instead entered into value-independent associative relationships with the other cues or rewards. These results show that dopamine transients, constrained within appropriate learning situations, support valueless associative learning.
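The model-free account being tested can be stated in two lines: a dopamine-like prediction error, delta = reward - V, is computed at reward, and cached into the value of the antecedent cue. A minimal sketch of that caching claim, with illustrative parameter values:

```python
def cached_value(n_trials=200, alpha=0.1, reward=1.0):
    """Model-free caching: the reward prediction error is added into the
    cue's stored value, so the cue becomes intrinsically valuable,
    independent of the particular reward that generated the error."""
    V = 0.0
    for _ in range(n_trials):
        delta = reward - V    # prediction error: unpredicted 'excess' value
        V += alpha * delta    # the error is cached into the antecedent cue
    return V
```

The abstract's point is that artificially induced dopamine transients did not leave behind this cached value V, arguing against a purely model-free reading of the transients.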

