Content-Addressable Memory Storage by Neural Networks: A General Model and Global Liapunov Method

1988 ◽  
Author(s):  
Stephen Grossberg
2018 ◽  
Vol 19 (10) ◽  
pp. 3033 ◽  
Author(s):  
James Robertson

The Neuron Doctrine, the cornerstone of research on normal and abnormal brain functions for over a century, has failed to discern the basis of complex cognitive functions. The location and mechanisms of memory storage and recall, consciousness, and learning remain enigmatic. The purpose of this article is to critically review the Neuron Doctrine in light of empirical data gathered over the past three decades. Similarly, the central role of the synapse and associated neural networks, as well as ancillary hypotheses such as gamma synchrony and cortical minicolumns, are critically examined. It is concluded that each is fundamentally flawed and that, over the past three decades, the study of non-neuronal cells, particularly astrocytes, has shown that virtually all functions ascribed to neurons are largely the result of direct or indirect actions of glia continuously interacting with neurons and neural networks. Recognizing the role of non-neuronal cells in higher brain functions is extremely important. Strict adherence to purely neurocentric ideas, deeply ingrained in the great majority of neuroscientists, remains a detriment to understanding normal and abnormal brain functions. By broadening the view of brain information processing beyond neurons, the understanding of higher-level brain functions, as well as of neurodegenerative and neurodevelopmental disorders, can move beyond the impasse that has been evident for decades.


Author(s):  
Bart Mak ◽  
Bülent Düz

Being able to give real-time on-board advice, without depending on extensive sets of measured data, is the ultimate goal of the digital twin concept. Ideally, the models used in a digital twin rely only on current in-service data, although they have been built using simulated and possibly some measured data. Working with just the 6-DOF motions of a ship, can the local sea state be estimated reliably using the digital twin concept? Does a general model exist to do so, without the need to measure or simulate the particular ship? In this paper, we discuss how simulations of an advancing ship, subjected to various sea states, can be used to estimate the relative wave direction from in-service motion measurements of the corresponding ship. Various types of neural networks are used and evaluated with simulated and measured data. In order to study the generalization power of the neural networks, a range of ships has been simulated, with varying lengths, drafts, and geometries. Neural networks have been trained on selections of the ships in this extended training set and evaluated on the remaining ships. Results show that the developed neural networks perform remarkably well on simulated data. Furthermore, generalization over geometry is very good, opening the door to training a general model for estimating sea state characteristics. Using the same model for in-service measurements does not yet perform well enough, and further research is required. The paper includes a discussion of possible causes for this performance gap and some promising ideas for future work.
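The pipeline described above can be sketched in miniature: summary features are extracted from simulated 6-DOF motion records and a classifier trained on them is applied to new records. The toy generator, feature set, and nearest-centroid classifier below are stand-in assumptions for illustration only, not the networks used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def motion_features(ts):
    # Per-channel standard deviation of a 6-DOF motion record
    # (surge, sway, heave, roll, pitch, yaw); ts has shape (n, 6).
    return ts.std(axis=0)

def synth_record(direction, n=512):
    # Toy generator (an assumption): head seas (0) mainly excite
    # heave/pitch, beam seas (1) mainly excite sway/roll.
    ts = 0.1 * rng.standard_normal((n, 6))
    excited = [2, 4] if direction == 0 else [1, 3]
    ts[:, excited] += np.sin(np.linspace(0.0, 40.0, n))[:, None]
    return ts

# "Train" on simulated records, then classify a new record by the
# nearest class centroid in feature space.
X, y = [], []
for d in (0, 1):
    for _ in range(20):
        X.append(motion_features(synth_record(d)))
        y.append(d)
X, y = np.array(X), np.array(y)
centroids = np.array([X[y == d].mean(axis=0) for d in (0, 1)])

def predict_direction(ts):
    f = motion_features(ts)
    return int(np.argmin(((centroids - f) ** 2).sum(axis=1)))
```

The generalization question studied in the paper corresponds to training this classifier on records from some hull geometries and evaluating it on records from unseen ones.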


2008 ◽  
Vol 20 (2) ◽  
pp. 573-601 ◽  
Author(s):  
Matthias Ihme ◽  
Alison L. Marsden ◽  
Heinz Pitsch

A pattern search optimization method is applied to the generation of optimal artificial neural networks (ANNs). Optimization is performed using a mixed-variable extension to the generalized pattern search method. This method offers the advantage that categorical variables, such as neural transfer functions and nodal connectivities, can be used as parameters in optimization. When used together with a surrogate, the resulting algorithm is highly efficient for expensive objective functions. Results demonstrate the effectiveness of this method in optimizing an ANN for the number of neurons, the type of transfer function, and the connectivity among neurons. The optimization method is applied to a chemistry approximation of practical relevance. In this application, temperature and a chemical source term are approximated as functions of two independent parameters using optimal ANNs. Comparison of the performance of optimal ANNs with conventional tabulation methods demonstrates equivalent accuracy with considerable savings in memory storage. The architecture of the optimal ANN for the approximation of the chemical source term consists of a fully connected feedforward network having four nonlinear hidden layers and 117 synaptic weights. An equivalent representation of the chemical source term using tabulation techniques would require a 500 × 500 grid-point discretization of the parameter space.
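The mixed-variable idea can be sketched as follows: one integer variable (neuron count) and one categorical variable (transfer function) are polled together, accepting any improving move. The analytic objective below is a stand-in assumption for the expensive network-training objective; the actual method uses a surrogate-assisted generalized pattern search.

```python
# Hypothetical objective: validation error of an ANN as a function of
# neuron count (integer) and transfer function (categorical). Here an
# analytic stand-in, not a real trained network.
def objective(n_neurons, transfer):
    bias = {"tanh": 0.00, "sigmoid": 0.05, "relu": 0.02}[transfer]
    return (n_neurons - 12) ** 2 / 100.0 + bias

def pattern_search(n0, t0, max_iter=50):
    n, t, best = n0, t0, objective(n0, t0)
    for _ in range(max_iter):
        improved = False
        # Poll the discrete neighbourhood: step the integer variable and
        # swap the categorical variable, keeping any improving move.
        for cand in [(n + 1, t), (max(1, n - 1), t)] + \
                    [(n, u) for u in ("tanh", "sigmoid", "relu") if u != t]:
            val = objective(*cand)
            if val < best:
                n, t, best = cand[0], cand[1], val
                improved = True
        if not improved:
            break
    return n, t, best
```

Starting from a poor initial design, the search walks to the (here synthetic) optimum of 12 neurons with a tanh transfer function; in the real setting each `objective` call would be a full network training run, which is why the surrogate matters.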


2013 ◽  
Vol 2013 ◽  
pp. 1-4
Author(s):  
Mau-Hsiang Shih ◽  
Feng-Sheng Tsai

Content-addressable memory (CAM) has been described by collective dynamics of neural networks and computing with attractors (equilibrium states). Studies of such neural network systems are typically based on the aspect of energy minimization. However, when the complexity and the dimension of neural network systems go up, the use of energy functions may have limitations for studying CAM. Recently, we have proposed the decirculation process in neural network dynamics, suggesting a step toward the reshaping of network structure and the control of neural dynamics without minimizing energy. Armed with the decirculation process, a class of decirculating maps and their structural properties are built here, dedicated to showing that circulation breaking taking place in the connections among many assemblies of neurons can collaborate harmoniously toward the completion of a network structure that generates CAM.


eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Oscar C González ◽  
Yury Sokolov ◽  
Giri P Krishnan ◽  
Jean Erik Delanois ◽  
Maxim Bazhenov

Continual learning remains an unsolved problem in artificial neural networks. The brain has evolved mechanisms to prevent catastrophic forgetting of old knowledge during new training. Building upon data suggesting the importance of sleep in learning and memory, we tested the hypothesis that sleep protects old memories from being forgotten after new learning. In the thalamocortical model, training a new memory interfered with previously learned old memories, leading to degradation and forgetting of the old memory traces. Simulating sleep after new learning reversed the damage and enhanced old and new memories. We found that when a new memory competed for previously allocated neuronal/synaptic resources, sleep replay changed the synaptic footprint of the old memory to allow overlapping neuronal populations to store multiple memories. Our study predicts that memory storage is dynamic, and sleep enables continual learning by combining consolidation of new memory traces with reconsolidation of old memory traces to minimize interference.


Author(s):  
Oscar Fontenla-Romero ◽  
Bertha Guijarro-Berdiñas ◽  
Beatriz Pérez-Sánchez

Functional networks are a generalization of neural networks, achieved by using multiargument, learnable functions; i.e., in these networks the transfer functions associated with neurons are not fixed but learned from data. In addition, there is no need to include parameters to weight links among neurons, since their effect is subsumed by the neural functions. Another distinctive characteristic of these models is that the specification of the initial topology for a functional network can be based on the features of the problem being addressed. Therefore, knowledge about the problem can guide the development of a network structure, although in the absence of such knowledge a general model can always be used. In this article we present a review of the field of functional networks, illustrated with practical examples.
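A one-neuron sketch of the idea: instead of a fixed activation behind weighted links, the neural function itself is learned as a linear combination of basis functions. The polynomial basis and sine target below are illustrative assumptions, not taken from the article.

```python
import numpy as np

# Learn a neural function f(x) = sum_k c_k * phi_k(x) from data by
# least squares, with a polynomial basis phi_k(x) = x**k.
x = np.linspace(-1.0, 1.0, 50)
target = np.sin(np.pi * x)        # the "unknown" neural function

basis = np.stack([x**k for k in range(6)], axis=1)
coeffs, *_ = np.linalg.lstsq(basis, target, rcond=None)

def neural_fn(z):
    # Evaluate the learned function at z (scalar or array).
    return sum(c * z**k for k, c in enumerate(coeffs))
```

Note there are no link weights to fit: the learnable coefficients live inside the function itself, which is the distinctive feature of functional networks described above.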


2021 ◽  
Vol 11 (4) ◽  
pp. 1865
Author(s):  
Peter Bajcsy ◽  
Nicholas J. Schaub ◽  
Michael Majurski

This paper addresses the problem of designing trojan detectors in neural networks (NNs) using interactive simulations. Trojans in NNs are defined as triggers in inputs that cause misclassification of such inputs into a class (or classes) unintended by the design of a NN-based model. The goal of our work is to understand encodings of a variety of trojan types in fully connected layers of neural networks. Our approach is: (1) to simulate nine types of trojan embeddings into dot patterns; (2) to devise measurements of NN states; and (3) to design trojan detectors in NN-based classification models. The interactive simulations are built on top of TensorFlow Playground with in-memory storage of data and NN coefficients. The simulations provide analytical, visualization, and output operations performed on training datasets and NN architectures. The measurements of a NN include: (a) model inefficiency using modified Kullback–Leibler (KL) divergence from uniformly distributed states; and (b) model sensitivity to variables related to data and NNs. Using the KL divergence measurements at each NN layer and per each predicted class label, a trojan detector is devised to discriminate NN models with or without trojans. To document the robustness of such a trojan detector with respect to NN architectures, dataset perturbations, and trojan types, several properties of the KL divergence measurement are presented.
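The inefficiency measurement can be sketched as follows: the empirical distribution of a layer's activation states is compared against the uniform distribution via KL divergence, with larger values indicating less uniform use of the layer's states. The binning scheme and activation range here are assumptions; the paper further modifies the KL divergence.

```python
import numpy as np

def kl_from_uniform(activations, bins=8):
    # Histogram layer activations (assumed to lie in [0, 1]) and compute
    # KL(p || uniform). Empty bins contribute 0, since p log(p/u) -> 0
    # as p -> 0.
    hist, _ = np.histogram(activations, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    u = 1.0 / bins
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / u)))
```

Uniformly spread activations give a divergence near zero, while activations collapsed into a single state give the maximum, log(bins); per-layer, per-class values of this kind are what the detector thresholds on.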


1989 ◽  
Vol 24 (3) ◽  
pp. 562-569 ◽  
Author(s):  
M. Verleysen ◽  
B. Sirletti ◽  
A.M. Vandemeulebroecke ◽  
P.G.A. Jespers
