The self-organized learning of noisy environmental stimuli requires distinct phases of plasticity

2019 ◽  
Author(s):  
Steffen Krüppel ◽  
Christian Tetzlaff

Abstract: Along sensory pathways, representations of environmental stimuli become increasingly sparse and expanded. If additionally the feed-forward synaptic weights are structured according to the inherent organization of stimuli, the increase in sparseness and expansion leads to a reduction of sensory noise. However, it is unknown how the synapses in the brain form the required structure, especially given the omnipresent noise of environmental stimuli. Here, we employ a combination of synaptic plasticity and intrinsic plasticity (adapting the excitability of each neuron individually) and present stimuli with an inherent organization to a feed-forward network. We observe that intrinsic plasticity maintains the sparseness of the neural code and thereby enables synaptic plasticity to learn the organization of stimuli in low-noise environments. Nevertheless, even high levels of noise can be handled after a subsequent phase of readaptation of the neuronal excitabilities by intrinsic plasticity. Interestingly, during this phase the synaptic structure has to be maintained. These results demonstrate that learning in the presence of noise requires adaptation not only of the synaptic structure but also of the neuronal properties in two distinct phases of learning: an encoding phase, during which the inherent organization of the environmental stimuli is learned, followed by a readaptation phase to readapt the neuronal system according to the current level of noise. The necessity of these distinct phases of learning suggests a new role for synaptic consolidation.

2020 ◽  
Vol 4 (1) ◽  
pp. 174-199 ◽  
Author(s):  
Steffen Krüppel ◽  
Christian Tetzlaff

Along sensory pathways, representations of environmental stimuli become increasingly sparse and expanded. If additionally the feed-forward synaptic weights are structured according to the inherent organization of stimuli, the increase in sparseness and expansion leads to a reduction of sensory noise. However, it is unknown how the synapses in the brain form the required structure, especially given the omnipresent noise of environmental stimuli. Here, we employ a combination of synaptic plasticity and intrinsic plasticity—adapting the excitability of each neuron individually—and present stimuli with an inherent organization to a feed-forward network. We observe that intrinsic plasticity maintains the sparseness of the neural code and thereby allows synaptic plasticity to learn the organization of stimuli in low-noise environments. Nevertheless, even high levels of noise can be handled after a subsequent phase of readaptation of the neuronal excitabilities by intrinsic plasticity. Interestingly, during this phase the synaptic structure has to be maintained. These results demonstrate that learning and recalling in the presence of noise requires the coordinated interplay between plasticity mechanisms adapting different properties of the neuronal circuit.
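The interplay the abstract describes, Hebbian synaptic plasticity learning the stimulus organization while intrinsic plasticity keeps each neuron's activity sparse, can be sketched with a toy rule. This is a minimal illustration, not the authors' model: the binary threshold neurons, the specific learning rates, the weight normalization, and the clustered "prototype" stimuli are all assumptions made here for a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 20, 50          # expansion: more output neurons than inputs
target_rate = 0.05            # desired sparseness of the output code
eta_w, eta_t = 0.01, 0.05     # synaptic / intrinsic learning rates (assumed)

W = rng.normal(0.0, 0.1, (n_out, n_in))
theta = np.zeros(n_out)       # per-neuron excitability threshold

def step(x):
    """One combined plasticity step: a Hebbian synaptic update plus an
    intrinsic update that nudges each neuron toward target_rate."""
    global W, theta
    y = (W @ x - theta > 0).astype(float)            # binary sparse response
    W += eta_w * np.outer(y, x)                       # Hebbian strengthening
    W /= np.maximum(np.linalg.norm(W, axis=1, keepdims=True), 1e-9)  # keep rows bounded
    theta += eta_t * (y - target_rate)                # intrinsic plasticity
    return y

# present noisy stimuli with an inherent (clustered) organization
prototypes = rng.random((4, n_in))
for t in range(2000):
    x = prototypes[t % 4] + 0.05 * rng.normal(size=n_in)  # low-noise regime
    y = step(x)

print(round(float(y.mean()), 3))   # mean activity settles near target_rate
```

The intrinsic term raises the threshold of over-active neurons and lowers it for silent ones, which is what keeps the code sparse enough for the Hebbian term to pick out the stimulus clusters.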


2015 ◽  
Vol 793 ◽  
pp. 483-488 ◽  
Author(s):  
N. Aminudin ◽  
Marayati Marsadek ◽  
N.M. Ramli ◽  
T.K.A. Rahman ◽  
N.M.M. Razali ◽  
...  

The computation of a security risk index to identify the system's condition is one of the major concerns in power system analysis. The traditional method of this assessment is highly time-consuming and infeasible for direct on-line implementation. Thus, this paper presents the application of a Multi-Layer Feed-Forward Network (MLFFN) to predict the voltage-collapse risk index due to line outage occurrences. The proposed ANN model considers the load at the load buses as well as the weather conditions at the transmission lines as inputs. To assess the effectiveness of the proposed method, the results are compared with the Generalized Regression Neural Network (GRNN) method. The results reveal that the MLFFN method shows a significant improvement over GRNN performance in terms of lower prediction error.
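The regression task the abstract describes, mapping bus loads and weather conditions to a continuous risk index, is a standard use of a multi-layer feed-forward network. The sketch below is only illustrative: the five inputs, the synthetic linear "risk index" target, the single tanh hidden layer, and the learning rate are all assumptions, not the paper's actual data or architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical inputs (e.g. bus loads plus a weather variable); the target
# "risk index" is synthetic, purely to illustrate fitting an MLFFN.
X = rng.random((200, 5))
y_true = (X @ np.array([0.3, 0.2, 0.1, 0.25, 0.15]))[:, None]

# One hidden layer with tanh activation, linear output
W1 = rng.normal(0, 0.5, (5, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

lr = 0.1
for epoch in range(500):
    h = np.tanh(X @ W1 + b1)                  # hidden-layer activations
    y_hat = h @ W2 + b2                       # linear output unit
    err = y_hat - y_true                      # mean-squared-error gradient
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)            # backpropagate through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((y_hat - y_true) ** 2).mean())
print(mse)
```

A GRNN, by contrast, is a kernel-based one-pass estimator; the paper's finding is that the iteratively trained feed-forward network reaches a lower prediction error on this task.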


2015 ◽  
Vol 2015 ◽  
pp. 1-12 ◽  
Author(s):  
Yasir Hassan Ali ◽  
Roslan Abd Rahman ◽  
Raja Ishak Raja Hamzah

The thickness of an oil-film lubricant can contribute to less gear-tooth wear and surface failure. The purpose of this research is to use artificial neural network (ANN) computational modelling to correlate spur-gear data from acoustic emissions, lubricant temperature, and specific film thickness (λ). The approach uses an algorithm to monitor the oil-film thickness and to detect which lubrication regime the gearbox is running in: hydrodynamic, elastohydrodynamic, or boundary. This monitoring can aid the identification of fault development. Feed-forward and recurrent Elman neural network algorithms were used to develop ANN models, which were subjected to a training, testing, and validation process. The Levenberg-Marquardt back-propagation algorithm was applied to reduce errors. Log-sigmoid and Purelin were identified as suitable transfer functions for the hidden and output nodes, respectively. The methods used in this paper show accurate predictions from the ANN, and the performance of the feed-forward network is superior to that of the Elman neural network.
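The pipeline the abstract outlines, a feed-forward net with log-sigmoid hidden nodes and a Purelin (linear) output predicting λ, followed by a threshold rule assigning the lubrication regime, can be sketched as below. The network weights here are untrained placeholders and the λ thresholds (1 and 3) are illustrative textbook-style cutoffs, not values taken from the paper.

```python
import numpy as np

def logsig(z):
    """Log-sigmoid transfer function (as named in MATLAB's toolbox)."""
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
# Hypothetical net: logsig hidden layer, purelin (linear) output, mapping
# [acoustic-emission level, lubricant temperature] -> specific film thickness
W1 = rng.normal(0, 1, (2, 6)); b1 = np.zeros(6)
W2 = rng.normal(0, 1, (6, 1)); b2 = np.zeros(1)

def predict_lambda(x):
    return float(logsig(x @ W1 + b1) @ W2 + b2)   # would be trained in practice

def regime(lam):
    """Assign a lubrication regime from lambda (illustrative thresholds)."""
    if lam < 1.0:
        return "boundary"
    if lam < 3.0:
        return "elastohydrodynamic"
    return "hydrodynamic"

print(regime(0.5), regime(2.0), regime(4.0))
```

In the paper the weights would be fitted with Levenberg-Marquardt back-propagation; only the regime-assignment step is shown working here.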


Author(s):  
Tanujit Chakraborty

Decision tree algorithms have been among the most popular algorithms for interpretable (transparent) machine learning since the early 1980s. On the other hand, deep learning methods have boosted the capacity of machine learning algorithms and are now being used for non-trivial applications in various applied domains. But training a fully-connected deep feed-forward network by gradient-descent backpropagation is slow and requires arbitrary choices regarding the number of hidden units and layers. In this paper, we propose near-optimal neural regression trees, intended to be much faster than deep feed-forward networks and to remove the need to specify the number of hidden units in the hidden layers of the neural network in advance. The key idea is to construct a decision tree and then simulate the decision tree with a neural network. This work aims to build a mathematical formulation of neural trees and to gain the complementary benefits of both sparse optimal decision trees and neural trees. We propose near-optimal sparse neural trees (NSNT), which are shown to be asymptotically consistent and robust in nature. Additionally, the proposed NSNT model obtains a fast rate of convergence, which is near-optimal up to some logarithmic factor. We comprehensively benchmark the proposed method on a sample of 80 datasets (40 classification datasets and 40 regression datasets) from the UCI machine learning repository. We establish that the proposed method is likely to outperform the current state-of-the-art methods (random forest, XGBoost, optimal classification tree, and near-optimal nonlinear trees) for the majority of the datasets.
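The key idea, constructing a decision tree and then simulating it with a neural network, can be illustrated on the smallest possible case: a depth-1 regression tree whose hard split is replaced by a sharpened sigmoid gate. This is a generic sketch of tree-to-network simulation, not the paper's NSNT construction; the threshold, leaf values, and gain k are all made up for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A depth-1 regression tree: split at threshold t, leaf predictions vL and vR.
t, vL, vR = 0.5, 1.0, 3.0

def tree_predict(x):
    return vL if x < t else vR

# Simulate the same tree with a tiny network: one sigmoid "gate" per internal
# node, sharpened by a large gain k so it approximates the hard step.
k = 50.0
def net_predict(x):
    g = sigmoid(k * (x - t))        # ~0 left of the split, ~1 right of it
    return (1 - g) * vL + g * vR    # soft mixture of the two leaf values

for x in (0.1, 0.9):
    print(tree_predict(x), round(net_predict(x), 3))  # prints matching pairs
```

Deeper trees follow the same pattern: one gate unit per internal node and one output unit per leaf, which is why the network's width comes directly from the tree rather than being chosen in advance.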

