Towards Efficient Neuromorphic Hardware: Unsupervised Adaptive Neuron Pruning

Electronics ◽  
2020 ◽  
Vol 9 (7) ◽  
pp. 1059 ◽  
Author(s):  
Wenzhe Guo ◽  
Hasan Erdem Yantır ◽  
Mohammed E. Fouda ◽  
Ahmed M. Eltawil ◽  
Khaled Nabil Salama

To solve real-time challenges, neuromorphic systems generally require deep and complex network structures. It is therefore crucial to find solutions that reduce network complexity and improve energy efficiency while maintaining high accuracy. To this end, we propose unsupervised strategies for pruning neurons during training in spiking neural networks (SNNs) by exploiting network dynamics. Neuron importance is judged by activity: neurons that fire more spikes contribute more to network performance. Based on this criterion, we demonstrate that pruning with an adaptive spike-count threshold is a simple and effective approach that significantly reduces network size while maintaining high classification accuracy. Online adaptive pruning also shows potential for energy-efficient training, since it requires fewer memory accesses and fewer weight-update computations. Furthermore, a parallel digital implementation scheme is proposed to implement SNNs on a field-programmable gate array (FPGA). Notably, the proposed pruning strategies preserve the dense format of the weight matrices, so the implementation architecture remains unchanged after network compression. The adaptive pruning strategy yields a 2.3× reduction in memory size and a 2.8× improvement in energy efficiency when 400 neurons are pruned from an 800-neuron network, at a cost of 1.69% in classification accuracy. The best choice of pruning percentage depends on the trade-off among accuracy, memory, and energy. This work therefore offers a promising solution for effective network compression and energy-efficient hardware implementation of neuromorphic systems in real-time applications.
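The adaptive spike-count-threshold criterion described above can be sketched as follows. This is a minimal illustration of the idea, not the authors' implementation; the function name and the mask-based representation are assumptions made for clarity.

```python
def prune_by_spike_count(spike_counts, prune_fraction):
    """Keep the most active neurons; prune the rest.

    Sketch of the paper's criterion that neurons firing more spikes
    contribute more: the spike-count threshold adapts so that roughly
    the chosen fraction of the least active neurons is removed.
    """
    n_prune = int(len(spike_counts) * prune_fraction)
    # Adaptive threshold: the spike count of the n_prune-th least active neuron.
    threshold = sorted(spike_counts)[n_prune] if n_prune > 0 else float("-inf")
    # Boolean keep-mask; zeroing masked neurons (rather than deleting rows)
    # preserves the dense weight-matrix format mentioned in the abstract.
    return [count >= threshold for count in spike_counts]

# Example: prune half of an 8-neuron layer by spike activity.
counts = [120, 3, 45, 0, 98, 7, 60, 2]
keep = prune_by_spike_count(counts, 0.5)
```

Because pruning only masks neurons, the FPGA datapath can stay identical before and after compression, which is the property the abstract highlights.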

2021 ◽  
Author(s):  
Ceca Kraišniković ◽  
Wolfgang Maass ◽  
Robert Legenstein

The brain uses recurrent spiking neural networks for higher cognitive functions such as symbolic computations, in particular, mathematical computations. We review the current state of research on spike-based symbolic computations of this type. In addition, we present new results showing that surprisingly small spiking neural networks can perform symbolic computations on bit sequences and numbers and can even learn such computations using a biologically plausible learning rule. The resulting networks operate in a rather low firing-rate regime, where they cannot simply emulate artificial neural networks by encoding continuous values through firing rates. Thus, we propose a new paradigm for symbolic computation in neural networks that provides concrete hypotheses about the organization of symbolic computations in the brain. The employed spike-based network models are the basis for drastically more energy-efficient computer hardware: neuromorphic hardware. Hence, our results can be seen as creating a bridge from symbolic artificial intelligence to energy-efficient implementation in spike-based neuromorphic hardware.



Author(s):  
Alexander D. Pisarev

This article studies how well-known principles of information processing in biological systems can be implemented in the input unit of a neuroprocessor, including the spike coding of information used in the latest generation of neural network models.

The development of modern neural network information technology raises a number of urgent tasks at the junction of several scientific disciplines. One of them is to create a hardware platform, a neuroprocessor, for energy-efficient operation of neural networks. Recent nanotechnology for the main units of the neuroprocessor relies on combined memristor very-large-scale logic and storage matrices. The matrix topology is built on the principle of maximum integration of programmable links between nodes. This article describes a method for implementing biomorphic neural functionality based on the programmable links of a highly integrated 3D logic matrix.

The paper focuses on achieving energy efficiency in the hardware used to model neural networks. The main part analyzes the known principles of information transfer and processing in biological systems from the point of view of their implementation in the input unit of the neuroprocessor. The author presents the scheme of an electronic neuron built from elements of a 3D logic matrix, together with a pulsed method of encoding input information that closely reflects the operating principle of a sensory biological neural system. A model of the electronic neuron is analyzed to select ranges of technological parameters for a real 3D logic matrix circuit. The implementation of disjunctive normal forms is shown, using a logic function in the input unit of a neuroprocessor as an example.
Results of modeling fragments of electric circuits with memristors of a 3D logic matrix in programming mode are presented.

The author concludes that biomorphic pulse coding of standard digital signals achieves a high degree of energy efficiency in the logic elements of the neuroprocessor by reducing the number of gate operations. This energy efficiency makes it possible to overcome the thermal limitation on scaling the three-dimensional layout of elements in memristor crossbars.
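The pulse coding of standard digital signals mentioned above can be illustrated with a simple sketch. The abstract does not specify the exact encoding, so the pulse-position scheme below (one event per set bit) is an assumption chosen to show why event-driven coding reduces switching activity; the function names are hypothetical.

```python
def pulse_encode(value, n_bits):
    """Encode an n-bit digital word as a sparse pulse train.

    Only set bits emit a pulse (represented here as a list of time
    slots), so downstream logic switches only for active inputs --
    the event-driven sparsity that underlies the claimed reduction
    in gate operations.
    """
    return [t for t in range(n_bits) if (value >> t) & 1]

def pulse_decode(pulses, n_bits):
    """Recover the digital word from its pulse-position events."""
    return sum(1 << t for t in pulses if t < n_bits)

word = 0b1010_0110
events = pulse_encode(word, 8)   # pulses only at set-bit slots
```

For sparse inputs (few set bits), the number of events, and hence the number of switching operations in the input unit, scales with the input activity rather than with the word length.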


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3240
Author(s):  
Tehreem Syed ◽  
Vijay Kakani ◽  
Xuenan Cui ◽  
Hakil Kim

In recent times, the use of modern neuromorphic hardware for brain-inspired SNNs has grown rapidly. With sparse input data, event-based neuromorphic hardware achieves low power consumption, particularly in the deeper layers. However, training spiking models with deep-ANN techniques is still considered a tedious task. Various ANN-to-SNN conversion methods have been proposed in the literature to train deep SNN models; nevertheless, these methods require hundreds to thousands of time-steps for training and still cannot attain good SNN performance. This work proposes customized model architectures (VGG, ResNet) for training deep convolutional spiking neural networks. Training is carried out with surrogate gradient descent backpropagation in a customized layer architecture similar to deep artificial neural networks, and the work further shows that fewer time-steps suffice when training SNNs with surrogate gradients. Because overfitting was encountered during training with surrogate gradient descent backpropagation, this work refines an SNN-based dropout technique compatible with surrogate gradients. The proposed customized SNN models achieve good classification results on both private and public datasets. Several experiments were carried out on an embedded platform (NVIDIA Jetson TX2 board), where deployment of the customized SNN models was extensively evaluated. Performance was validated in terms of processing time and inference accuracy on both PC and embedded platforms, showing that the proposed customized models and training techniques achieve good performance on various datasets such as CIFAR-10, MNIST, SVHN, and private KITTI and Korean license plate datasets.
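The surrogate gradient idea underlying the training method above can be sketched for a single synapse. The spike function is a hard threshold whose true derivative is zero almost everywhere, so backpropagation substitutes a smooth approximation. The fast-sigmoid surrogate and all parameter values below are illustrative assumptions, not necessarily the exact choices in the paper.

```python
def spike_forward(v, threshold=1.0):
    """Forward pass: non-differentiable spike generation (Heaviside step)."""
    return 1.0 if v >= threshold else 0.0

def spike_surrogate_grad(v, threshold=1.0, beta=10.0):
    """Backward pass: derivative of a fast sigmoid, used in place of the
    step function's true (zero-almost-everywhere) derivative."""
    x = beta * (v - threshold)
    return beta / (1.0 + abs(x)) ** 2

# One surrogate-gradient update for a single synapse (illustrative only).
w, inp, target, lr = 0.5, 2.0, 0.0, 0.01
v = w * inp                                    # membrane potential (one step, no leak)
out = spike_forward(v)                         # forward: hard spike
err = out - target
grad_w = err * spike_surrogate_grad(v) * inp   # chain rule with the surrogate
w -= lr * grad_w
```

In a full framework the same substitution is made per neuron and per time-step (e.g. via a custom autograd function), which is what makes end-to-end backpropagation through spiking layers possible with few time-steps.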


2015 ◽  
Vol 59 (2) ◽  
pp. 1-5 ◽  
Author(s):  
Juncheng Shen ◽  
De Ma ◽  
Zonghua Gu ◽  
Ming Zhang ◽  
Xiaolei Zhu ◽  
...  

Author(s):  
Sylvain Saighi ◽  
Jean Tomas ◽  
Yannick Bornat ◽  
Bilel Belhadj ◽  
Olivia Malot ◽  
...  

2009 ◽  
Author(s):  
B. Belhadj ◽  
J. Tomas ◽  
O. Malot ◽  
Y. Bornat ◽  
G. N'Kaoua ◽  
...  
