Simulation and hardware implementation of competitive learning neural networks

Author(s):  
A. Prieto ◽  
P. Martin-Smith ◽  
J. J. Merelo ◽  
F. J. Pelayo ◽  
J. Ortega ◽  
...  
Author(s):  
Volodymyr Shymkovych ◽  
Sergii Telenyk ◽  
Petro Kravets

This article introduces a method for realizing the Gaussian activation function of radial-basis-function (RBF) neural networks in hardware on field-programmable gate arrays (FPGAs). Results of modeling the Gaussian function on FPGA chips of different families are presented, and RBF neural networks of various topologies have been synthesized and investigated. The hardware component implemented by this method is an RBF neural network with four neurons in the hidden layer and one output neuron with a sigmoid activation function, realized on an FPGA using 16-bit fixed-point numbers and occupying 1193 look-up tables (LUTs). Each hidden-layer neuron of the RBF network is implemented on the FPGA as a separate computing unit. The speed, measured as the total delay of the combinational circuit of the RBF network block, was 101.579 ns. The implementation of the Gaussian activation functions of the hidden layer occupies 106 LUTs, with a delay of 29.33 ns and an absolute error of ±0.005. These results were obtained with the Spartan-3 family of chips; modeling on chips of other series is also presented in the article. Hardware implementation of RBF neural networks at such speeds allows them to be used in real-time control systems for high-speed objects.
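The structure described in the abstract, Gaussian hidden neurons evaluated in 16-bit fixed-point arithmetic feeding a sigmoid output neuron, can be sketched in software. The following is a minimal illustration under stated assumptions, not the authors' FPGA design: the Q8.8 split of the 16-bit word is assumed (the article does not specify it), and `math.exp` stands in for the on-chip lookup table an FPGA realization would use.

```python
import math

# Assumed Q8.8 fixed-point format: 8 integer bits, 8 fractional bits.
FRAC_BITS = 8
SCALE = 1 << FRAC_BITS  # 2^8 = 256

def to_fixed(x: float) -> int:
    return int(round(x * SCALE))

def from_fixed(q: int) -> float:
    return q / SCALE

def gaussian_fixed(x_q: int, c_q: int, inv_two_sigma2_q: int) -> int:
    """exp(-(x - c)^2 / (2*sigma^2)) with all operands in Q8.8.
    On an FPGA the exponential would come from a lookup table;
    math.exp stands in for that table here."""
    d = x_q - c_q
    d2 = (d * d) >> FRAC_BITS                 # squared distance, rescaled to Q8.8
    arg = (d2 * inv_two_sigma2_q) >> FRAC_BITS
    return to_fixed(math.exp(-from_fixed(arg)))

def sigmoid_fixed(x_q: int) -> int:
    """Sigmoid output activation, same Q8.8 convention."""
    return to_fixed(1.0 / (1.0 + math.exp(-from_fixed(x_q))))

def rbf_net(x: float, centers, inv_widths, weights) -> float:
    """Four Gaussian hidden neurons feeding one sigmoid output neuron,
    mirroring the topology described in the abstract (weights are
    hypothetical parameters for illustration)."""
    acc = 0
    for c, s, w in zip(centers, inv_widths, weights):
        h = gaussian_fixed(to_fixed(x), to_fixed(c), to_fixed(s))
        acc += (h * to_fixed(w)) >> FRAC_BITS  # rescale product back to Q8.8
    return from_fixed(sigmoid_fixed(acc))
```

With 8 fractional bits the quantization step is 1/256 ≈ 0.004, which is consistent with the ±0.005 absolute error reported for the activation function.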


2018 ◽  
Vol 14 (4) ◽  
pp. 1-20 ◽  
Author(s):  
Xiaowei Xu ◽  
Qing Lu ◽  
Tianchen Wang ◽  
Yu Hu ◽  
Chen Zhuo ◽  
...  

Author(s):  
RONALD H. SILVERMAN

Neural networks differ from traditional approaches to image processing in their ability to adapt to regularities in image structure and to self-organize so as to implement directed transformations. Biomedical ultrasonic images are often degraded by noise and other factors, making enhancement techniques particularly important. This paper describes the use of back propagation and competitive learning for enhancement and segmentation of ultrasonic images of the eye. Of particular interest is the extension of these techniques to segmentation of three-dimensional data sets, where simple thresholding and gradient operations are not entirely successful.
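The competitive-learning half of such a segmentation pipeline can be illustrated with a winner-take-all scheme over pixel intensities. This is a generic sketch, not Silverman's method: prototype intensities compete for each pixel, the winner moves toward it, and segmentation labels each pixel by its nearest prototype (the learning rate, epoch count, and class count are assumed values).

```python
import random

def competitive_segment(pixels, n_classes=3, lr=0.05, epochs=20, seed=0):
    """Winner-take-all competitive learning on scalar pixel intensities.
    Prototypes compete for each pixel; only the winner is updated.
    Returns the learned prototypes and a per-pixel class label."""
    rng = random.Random(seed)
    # initialize prototypes uniformly over the observed intensity range
    w = [rng.uniform(min(pixels), max(pixels)) for _ in range(n_classes)]
    for _ in range(epochs):
        for p in pixels:
            k = min(range(n_classes), key=lambda i: abs(w[i] - p))
            w[k] += lr * (p - w[k])   # move the winning prototype toward p
    labels = [min(range(n_classes), key=lambda i: abs(w[i] - p)) for p in pixels]
    return w, labels
```

For real ultrasonic images the same competition would run over feature vectors (intensity plus local texture) rather than raw scalars, but the update rule is unchanged.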


Author(s):  
José García-Rodríguez ◽  
Francisco Flórez-Revuelta ◽  
Juan Manuel García-Chamizo

Self-organising neural networks try to preserve the topology of an input space by means of their competitive learning. This capacity has been used, among others, for the representation of objects and their motion. In this work we use a kind of self-organising network, the Growing Neural Gas, to represent deformations in objects along a sequence of images. As a result of an adaptive process, the objects are represented by a topology-representing graph that constitutes an induced Delaunay triangulation of their shapes. These maps adapt to changes in the objects' topology without resetting the learning process.
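The core of the Growing Neural Gas adaptation can be sketched as follows. This is a simplified illustration only: node insertion, error accumulation, and node removal are omitted, and the parameter names and values (`eps_b`, `eps_n`, `max_age`) are assumptions rather than those of the paper.

```python
def dist2(a, b):
    """Squared Euclidean distance between two equal-length vectors."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def gng_adapt(nodes, edges, x, eps_b=0.2, eps_n=0.006, max_age=50):
    """One GNG adaptation step on a graph of node positions and aged edges.
    nodes: list of position vectors; edges: {(i, j): age} with i < j.
    Moves the winner strongly and its graph neighbours weakly toward x,
    ages the winner's edges, refreshes the winner-runner-up edge, and
    drops edges older than max_age."""
    # find the two nodes nearest to the input sample
    order = sorted(range(len(nodes)), key=lambda i: dist2(nodes[i], x))
    s1, s2 = order[0], order[1]
    # move the winner toward x
    nodes[s1] = [w + eps_b * (xi - w) for w, xi in zip(nodes[s1], x)]
    # move topological neighbours of the winner, and age their edges
    for (a, b) in list(edges):
        if s1 in (a, b):
            other = b if a == s1 else a
            nodes[other] = [w + eps_n * (xi - w) for w, xi in zip(nodes[other], x)]
            edges[(a, b)] += 1
    # create or refresh the edge between the two winners (age 0)
    edges[(min(s1, s2), max(s1, s2))] = 0
    # remove edges that have grown too old
    for k in [k for k, age in edges.items() if age > max_age]:
        del edges[k]
    return s1, s2
```

Because only edge ages and local positions change per sample, the graph can keep tracking a deforming object frame after frame, which is the property the abstract relies on: adaptation continues without resetting the learning process.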

