Binary Synaptic Connections Based on Memory Switching in a-Si:H for Artificial Neural Networks

1987 ◽  
Vol 95 ◽  
Author(s):  
A. P. Thakoor ◽  
J. L. Lamb ◽  
A. Moopenn ◽  
S. K. Khanna

Abstract
Nonvolatile, associative electronic memory based on neural network models promises high (~10⁹ bits/cm²) information storage density, since the information is stored in a matrix of only two-terminal, passive interconnections (synapses). An electronic memory switch at each interconnect would be ideal as a programmable synapse. The massive parallelism in the architecture, however, requires that the ‘ON’ state of a synaptic connection be unusually ‘weak’ (i.e., highly resistive). For example, a binary synapse should be ~10⁶ Ω in its ‘ON’ state and >10⁹ Ω in the ‘OFF’ state for a 1024 × 1024 matrix (~256 Kbits of programmable read-only memory, PROM). The small deliverable switching energy dictated by the resistive ‘ON’-state requirement is a new constraint for switching in thin films. Memory switching in hydrogenated amorphous silicon (a-Si:H), along with ballast (current-limiting) resistors patterned from resistivity-tailored amorphous Ge-metal alloys, is investigated for a binary PROM matrix. A 1 μm² area of a-Si:H could be switched from ~10¹⁰ Ω (OFF state) to ~10⁵ Ω (ON state) by a voltage pulse of 1 μsec duration, with a switching energy of ~1 nanojoule delivered through a 10⁶ Ω ballast resistor. Programmable, read-only, 1600-synapse (40 × 40) test arrays with uniform connection strengths (variation ±2%) and a-Si:H switching elements have been fabricated. The suitability of memory switching in a-Si:H for high-density neural networks is discussed.
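The quoted figures can be cross-checked with a simple energy relation. The abstract does not state the drive voltage; the sketch below infers it from E ≈ (V²/R)·t under the simplifying assumption that the pulse energy is set by the ballast resistor alone:

```python
# Order-of-magnitude check of the switching figures quoted in the abstract.
# The drive voltage is NOT stated in the text; it is inferred here from
# E ≈ (V²/R)·t, assuming the pulse energy is limited by the ballast resistor.
R_ballast = 1e6     # ballast resistance, ohms (~10⁶ Ω)
t_pulse = 1e-6      # pulse duration, seconds (~1 μsec)
E_switch = 1e-9     # switching energy, joules (~1 nanojoule)

V = (E_switch * R_ballast / t_pulse) ** 0.5   # implied drive voltage
print(f"implied drive voltage ≈ {V:.1f} V")
```

The implied pulse amplitude of a few tens of volts is consistent with the highly resistive ‘ON’-state requirement: a weaker ballast would deliver far more energy at the same voltage.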

1996 ◽  
Vol 19 (2) ◽  
pp. 285-295 ◽  
Author(s):  
J. J. Wright ◽  
D. T. J. Liley

Abstract
There is some complementarity between models for the origin of the electroencephalogram (EEG) and neural network models for information storage in brainlike systems. From the EEG models of Freeman, of Nunez, and of the authors' group, we argue that the wavelike processes revealed in the EEG exhibit linear and near-equilibrium dynamics at macroscopic scale, despite extremely nonlinear – probably chaotic – dynamics at microscopic scale. Simulations of cortical neuronal interactions at global and microscopic scales are then presented. The simulations depend on anatomical and physiological estimates of synaptic densities, coupling symmetries, synaptic gain, dendritic time constants, and axonal delays. It is shown that the frequency content, wave velocities, frequency/wavenumber spectra, and response to cortical activation of the electrocorticogram (ECoG) can be reproduced by a “lumped” simulation treating small cortical areas as single-function units. The corresponding cellular neural network simulation has properties that include those of the attractor neural networks proposed by Amit and by Parisi. Within the simulations at both scales, sharp transitions occur between low and high cell firing rates. These transitions may form a basis for neural interactions across scale. To maintain overall cortical dynamics in the normal low firing-rate range, interactions between the cortex and subcortical systems are required to prevent runaway global excitation. Thus, the interaction of cortex and subcortex via corticostriatal and related pathways may partly regulate global dynamics by a principle analogous to adiabatic control of artificial neural networks.
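The sharp transitions between low and high firing rates can be illustrated with a much simpler system than the authors' simulations: a single lumped excitatory unit with a sigmoidal rate function and recurrent feedback is bistable for suitable gain. The gain, threshold, and coupling values below are illustrative, not taken from the paper:

```python
import math

# Minimal sketch (not the authors' cortical simulation): one lumped
# excitatory unit with a sigmoidal firing-rate function and recurrent
# gain shows bistability between low and high firing rates.
# Gain, threshold, and coupling values are illustrative assumptions.
def S(x, gain=5.0, theta=1.0):
    """Sigmoidal firing-rate function."""
    return 1.0 / (1.0 + math.exp(-gain * (x - theta)))

def settle(r0, w=2.0, steps=200):
    """Iterate r <- S(w*r) to a fixed point of the recurrent unit."""
    r = r0
    for _ in range(steps):
        r = S(w * r)
    return r

low = settle(0.0)    # a quiet start stays at a low-rate fixed point
high = settle(1.0)   # an active start settles at a high-rate fixed point
print(f"low-rate state: {low:.3f}, high-rate state: {high:.3f}")
```

Both fixed points are stable, which is why, as the abstract notes, an external mechanism (here, subcortical regulation) is needed to keep the system out of the runaway high-rate state.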


MRS Bulletin ◽  
1988 ◽  
Vol 13 (8) ◽  
pp. 30-35 ◽  
Author(s):  
Dana Z. Anderson

From the time of their conception, holography and holograms have evolved as a metaphor for human memory. Holograms can be made so that the information they contain is distributed throughout the holographic medium—destroy part of the hologram and the stored information remains wholly intact, except for a loss of detail. In this property holograms evidently have something in common with human memory, which is to some extent resilient against physical damage to the brain. There is much more to the metaphor than simply that information is stored in a distributed manner.

Research in the optics community is now looking to holography, in particular dynamic holography, not only for information storage but for information processing as well. The ideas are based upon neural network models. Neural networks are models for processing that are inspired by the apparent architecture of the brain. This is a processing paradigm that is new to optics. From within this network paradigm we look to build machines that can store and recall information associatively, play back a chain of recorded events, undergo learning and possibly forgetting, make decisions, adapt to a particular environment, and self-organize to evolve some desirable behavior. We hope that neural network models will give rise to optical machines for memory, speech processing, visual processing, language acquisition, motor control, and so on.
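The associative store-and-recall behavior described above can be sketched electronically with a Hopfield-style outer-product memory, which is the standard network abstraction such optical systems implement; this is a generic illustration, not Anderson's holographic design:

```python
# Hedged sketch of associative recall (a Hopfield-style outer-product
# memory, not the optical implementation discussed above): a stored
# bipolar pattern is recovered from a corrupted cue.
patterns = [[1, -1, 1, -1, 1, -1], [1, 1, 1, -1, -1, -1]]
n = len(patterns[0])

# Outer-product (Hebbian) weight matrix with zero diagonal.
W = [[0 if i == j else sum(p[i] * p[j] for p in patterns) for j in range(n)]
     for i in range(n)]

def recall(cue, steps=5):
    """Threshold updates pull the state toward the nearest stored pattern."""
    s = cue[:]
    for _ in range(steps):
        s = [1 if sum(W[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

noisy = [1, -1, 1, -1, 1, 1]     # first pattern with its last bit flipped
print(recall(noisy))             # recovers [1, -1, 1, -1, 1, -1]
```

In a dynamic-holography realization, the weight matrix plays the role of the recorded interference pattern, and recall corresponds to optical reconstruction from a partial beam.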


2018 ◽  
Vol 6 (11) ◽  
pp. 216-216 ◽  
Author(s):  
Zhongheng Zhang ◽  
Marcus W. Beck ◽  
David A. Winkler ◽  
Bin Huang ◽  
...  

Author(s):  
Fathi Ahmed Ali Adam, Mahmoud Mohamed Abdel Aziz Gamal El-Di

The study examined the use of artificial neural network models to predict the exchange rate in Sudan, using annual exchange rate data between the US dollar and the Sudanese pound. It aimed to formulate artificial neural network models with which the exchange rate can be predicted in the coming period. The importance of the study lies in the need to use modern predictive models in place of classical ones. The study hypothesized that artificial neural network models have a high ability to predict the exchange rate. The most important result is that artificial neural network models can predict the exchange rate accurately, with the MLP (1-1-1) network chosen as the best model for that purpose. The study recommended developing the proposed model for long-term forecasting.
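An MLP (1-1-1) is about the smallest possible network: one input (the previous rate), one hidden neuron, one output (the next rate). The sketch below trains such a network on a synthetic series; the study's actual SDG/USD data, preprocessing, and training method are not reproduced here:

```python
import math

# Illustrative MLP(1-1-1) sketch: one input (last year's rate), one hidden
# neuron, one linear output (next year's rate). The series is synthetic;
# the study's actual exchange-rate data are NOT reproduced here.
series = [math.exp(0.1 * t) for t in range(30)]   # toy upward-trending "rate"
scale = max(series)
xs = [r / scale for r in series[:-1]]             # input: rate at year t
ys = [r / scale for r in series[1:]]              # target: rate at year t+1

w1, b1, w2, b2 = 0.5, 0.0, 0.5, 0.0               # the four 1-1-1 parameters
lr = 0.1
for _ in range(3000):                             # plain stochastic gradient descent
    for x, y in zip(xs, ys):
        h = math.tanh(w1 * x + b1)                # hidden activation
        p = w2 * h + b2                           # linear output
        e = p - y
        w2 -= lr * e * h
        b2 -= lr * e
        dh = e * w2 * (1 - h * h)                 # backpropagated error
        w1 -= lr * dh * x
        b1 -= lr * dh

# In-sample one-step-ahead forecast from the last observed value.
pred = (w2 * math.tanh(w1 * xs[-1] + b1) + b2) * scale
print(f"forecast: {pred:.2f}  (actual next value: {series[-1]:.2f})")
```

Even this four-parameter model tracks a smooth trend closely, which is consistent with the study's finding that a very small MLP sufficed for annual data.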


2021 ◽  
Vol 1 (1) ◽  
pp. 19-29
Author(s):  
Zhe Chu ◽  
Mengkai Hu ◽  
Xiangyu Chen

Recently, deep learning has been successfully applied to robotic grasp detection, and many end-to-end detection approaches based on convolutional neural networks (CNNs) have been proposed. However, end-to-end approaches impose strict requirements on the dataset used to train the neural network models, which are hard to meet in practical use. We therefore propose a two-stage approach that uses a particle swarm optimizer (PSO) candidate estimator together with a CNN to detect the most likely grasp. Our approach achieved an accuracy of 92.8% on the Cornell Grasp Dataset, placing it among the leading existing approaches, and it runs at real-time speeds. With a small change to the approach, we can predict multiple grasps per object at the same time, so that an object can be grasped in a variety of ways.
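The two-stage structure can be sketched as a particle swarm proposing grasp candidates that a scorer then ranks. In the paper the scorer is the trained CNN; in this hypothetical sketch a toy quadratic function stands in for it, and the search ranges and PSO constants are generic defaults, not the authors' settings:

```python
import random

random.seed(1)

# Hypothetical two-stage sketch: a particle swarm proposes grasp candidates
# (x, y, angle) and a scorer ranks them. The paper uses a trained CNN as
# the scorer; a toy quadratic function stands in for it here.
def toy_cnn_score(x, y, theta):
    """Stand-in grasp-quality score (peaks at x=32, y=24, theta=0.5)."""
    return -((x - 32) ** 2 + (y - 24) ** 2 + 100 * (theta - 0.5) ** 2)

def pso_best(score, n=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    dims = [(0.0, 64.0), (0.0, 48.0), (0.0, 3.14)]   # image-plane search ranges
    pos = [[random.uniform(lo, hi) for lo, hi in dims] for _ in range(n)]
    vel = [[0.0, 0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]                      # per-particle best
    gbest = max(pbest, key=lambda p: score(*p))[:]   # swarm best
    for _ in range(iters):
        for i in range(n):
            for d in range(3):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if score(*pos[i]) > score(*pbest[i]):
                pbest[i] = pos[i][:]
                if score(*pbest[i]) > score(*gbest):
                    gbest = pbest[i][:]
    return gbest

x, y, theta = pso_best(toy_cnn_score)
print(f"best grasp candidate: x={x:.1f}, y={y:.1f}, theta={theta:.2f}")
```

The appeal of the two-stage split is visible even in the toy version: the candidate estimator needs only a score for each proposal, so the CNN never has to regress grasp coordinates directly.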


10.14311/1121 ◽  
2009 ◽  
Vol 49 (2) ◽  
Author(s):  
M. Chvalina

This article analyses the existing possibilities for using standard statistical methods and artificial intelligence methods for short-term forecasting and simulation of demand in the field of telecommunications. The most widespread methods are based on time series analysis, while approaches based on artificial intelligence methods, including neural networks, are now booming. The separate approaches are applied to demand modelling in telecommunications, the results of these models are compared with actual guaranteed values, and the quality of the neural network models is then examined.
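The simplest of the time-series baselines the article compares against is exponential smoothing. The sketch below applies single exponential smoothing to a synthetic monthly demand series; the numbers and smoothing constant are illustrative, not the article's telecommunications data:

```python
# Illustrative single-exponential-smoothing baseline (synthetic numbers,
# not the article's telecommunications data).
demand = [120, 130, 125, 140, 150, 145, 160]   # toy monthly demand series
alpha = 0.4                                    # smoothing constant (assumed)

level = demand[0]
for d in demand[1:]:
    level = alpha * d + (1 - alpha) * level    # s_t = a*x_t + (1-a)*s_(t-1)
forecast = level                               # one-step-ahead forecast
print(f"next-period demand forecast: {forecast:.1f}")
```

Such baselines are cheap and interpretable, which is why neural network models must beat them on guaranteed values to justify their extra complexity.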


Author(s):  
Ming Zhang

Real-world financial data is often discontinuous and non-smooth, and accuracy becomes a problem if we attempt to simulate such functions with ordinary neural networks. Neural network group models can perform this task with greater accuracy. Both the Polynomial Higher Order Neural Network Group (PHONNG) and the Trigonometric Polynomial Higher Order Neural Network Group (THONNG) models are studied in this chapter. These PHONNG and THONNG models are open-box, convergent models capable of approximating any piecewise continuous function to any degree of accuracy. Moreover, they can handle higher-frequency, higher-order nonlinear, and discontinuous data. Results obtained using the PHONNG and THONNG financial simulators are presented, confirming that the group models converge without difficulty and are considerably more accurate (0.7542%–1.0715%) than individual neural network models such as the Polynomial Higher Order Neural Network (PHONN) and Trigonometric Polynomial Higher Order Neural Network (THONN) models.
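The core higher-order idea can be shown in miniature: instead of feeding a raw input to a neuron, feed higher-order terms of it, so a single linear layer can fit a nonlinear curve. This is a generic sketch of that idea, not the chapter's exact PHONN/THONN formulation:

```python
# Minimal sketch of the higher-order idea (not the chapter's exact PHONN /
# THONN formulation): feed higher-order terms of the input (x, x^2, x^3)
# to a single linear neuron, so one layer can fit a nonlinear curve.
xs = [i / 10 for i in range(-20, 21)]
ys = [x ** 3 - x for x in xs]                 # nonlinear target curve

w = [0.0, 0.0, 0.0]                           # weights on x, x^2, x^3
lr = 0.01
for _ in range(5000):                         # stochastic gradient descent
    for x, y in zip(xs, ys):
        feats = [x, x * x, x ** 3]            # higher-order input terms
        p = sum(wi * f for wi, f in zip(w, feats))
        e = p - y
        for d in range(3):
            w[d] -= lr * e * feats[d]

# For y = x^3 - x the weights should approach (-1, 0, 1).
print([round(wi, 3) for wi in w])
```

Because the weights act on explicit polynomial terms, the fitted model is "open box" in the chapter's sense: each weight can be read off directly as a coefficient of the recovered function.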


Author(s):  
Joarder Kamruzzaman ◽  
Ruhul Sarker

The primary aim of this chapter is to present an overview of the artificial neural network basics and operation, architectures, and the major algorithms used for training the neural network models. As can be seen in subsequent chapters, neural networks have made many useful contributions to solve theoretical and practical problems in finance and manufacturing areas. The secondary aim here is therefore to provide a brief review of artificial neural network applications in finance and manufacturing areas.


This chapter develops two new nonlinear artificial higher order neural network models: sine and sine higher order neural networks (SIN-HONN) and cosine and cosine higher order neural networks (COS-HONN). Financial data prediction using the SIN-HONN and COS-HONN models is tested. Results show that SIN-HONN and COS-HONN are good models for simulating and predicting financial data with purely sine or purely cosine features, compared with polynomial higher order neural network (PHONN) and trigonometric higher order neural network (THONN) models.
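Why a sine-only model suits sine-featured data can be seen from basis orthogonality: for data with purely sine structure, the weights on a sine basis can be recovered by a discrete projection. This is a hypothetical illustration of that property, not the chapter's SIN-HONN training procedure:

```python
import math

# Hypothetical sketch of why sine-feature models suit sine-featured data
# (NOT the chapter's SIN-HONN formulation): weights on a sine basis can be
# recovered by discrete orthogonality instead of iterative training.
N = 200
xs = [2 * math.pi * i / N for i in range(N)]
ys = [2.0 * math.sin(x) + 0.5 * math.sin(2 * x) for x in xs]   # toy series

# a_k = (2/N) * sum_i y_i * sin(k * x_i)  (discrete sine-basis projection)
a = [2 / N * sum(y * math.sin(k * x) for x, y in zip(xs, ys)) for k in (1, 2)]
print([round(ak, 3) for ak in a])   # recovers the coefficients [2.0, 0.5]
```

For data that is not purely sinusoidal, this projection alone no longer suffices, which matches the chapter's caveat that SIN-HONN and COS-HONN excel specifically on sine-only or cosine-only features.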

