Error Bounds for a Matrix-Vector Product Approximation with Deep ReLU Neural Networks

2020
Author(s):
Tilahun Getu

Abstract—Inspired by the depth and breadth of developments in the theory of deep learning, we pose two fundamental questions: can we accurately approximate an arbitrary matrix-vector product using deep rectified linear unit (ReLU) feedforward neural networks (FNNs)? If so, can we bound the resulting approximation error? To answer these questions, we derive error bounds in Lebesgue and Sobolev norms for the approximation of a matrix-vector product with deep ReLU FNNs. Since a matrix-vector product models numerous problems in wireless communications and signal processing; network science and graph signal processing; and network neuroscience and brain physics, we discuss various applications motivated by an accurate matrix-vector product approximation with deep ReLU FNNs. Accordingly, the derived error bounds offer theoretical insight and guarantees for the development of algorithms based on deep ReLU FNNs.
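To make the object of the abstract concrete, the following Python/NumPy sketch builds a one-hidden-layer ReLU FNN that realizes the map x ↦ Ax through the identity x = ReLU(x) − ReLU(−x). This is a minimal illustration, not the paper's construction: the sizes m and n and the identity-based weights are assumptions, and in exact arithmetic this particular network is error-free, so it only illustrates the quantity whose approximation error the paper bounds, not the constrained regime the paper analyzes.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical problem sizes; the paper treats arbitrary matrices.
m, n = 8, 5
A = rng.standard_normal((m, n))

def relu(z):
    return np.maximum(z, 0.0)

def relu_fnn_matvec(A, x):
    # Hidden layer computes h = ReLU([x; -x]) (width 2n); the output
    # layer [A, -A] then recovers A x, since x = ReLU(x) - ReLU(-x).
    W1 = np.vstack((np.eye(A.shape[1]), -np.eye(A.shape[1])))  # (2n, n)
    W2 = np.hstack((A, -A))                                    # (m, 2n)
    h = relu(W1 @ x)
    return W2 @ h

x = rng.standard_normal(n)
err = np.linalg.norm(relu_fnn_matvec(A, x) - A @ x)
print(f"approximation error: {err:.2e}")  # zero up to floating point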


Author(s):  
Jörg Bornschein

An FPGA-based coprocessor has been implemented that simulates the dynamics of a large recurrent neural network composed of binary neurons. The design has been used for unsupervised learning of receptive fields. Since the number of neurons to be simulated (>10^4) exceeds the FPGA logic capacity available for a direct implementation, a set of streaming processors has been designed. Given the state and activity vectors of the neurons at time t and a sparse connectivity matrix, these streaming processors calculate the state and activity vectors for time t + 1. The operation implemented by the streaming processors can be understood as a generalized form of a sparse matrix-vector product (SpMxV). The largest dataset, the sparse connectivity matrix, is stored and processed in a compressed format to better utilize the available memory bandwidth.
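For intuition only, here is a hypothetical software analogue in Python/SciPy of one streaming-processor update: a sparse matrix-vector product over a compressed connectivity matrix, followed by an illustrative leaky integration and thresholding that yield the binary activity vector for time t + 1. The CSR compression, the leak and threshold parameters, and the density are all assumptions for the sketch; the abstract does not specify the coprocessor's actual compression scheme or neuron dynamics.

import numpy as np
from scipy.sparse import random as sparse_random

rng = np.random.default_rng(1)

# Hypothetical size and density; the FPGA design targets >10^4 neurons.
n = 1024
W = sparse_random(n, n, density=0.01, format="csr", random_state=1)

def step(W, state, activity, leak=0.9, threshold=1.0):
    # Generalized SpMxV: synaptic drive from the binary activity
    # vector through the compressed (CSR) connectivity matrix.
    drive = W @ activity
    new_state = leak * state + drive  # illustrative leaky dynamics
    new_activity = (new_state > threshold).astype(np.float64)
    return new_state, new_activity

state = np.zeros(n)
activity = (rng.random(n) < 0.1).astype(np.float64)  # initial binary activity
for _ in range(10):
    state, activity = step(W, state, activity)
print(int(activity.sum()), "active neurons after 10 steps")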


2020
Vol 76 (11)
pp. 8883-8900
Author(s):
Maria Barreda
Manuel F. Dolz
M. Asunción Castaño
Pedro Alonso-Jordá
Enrique S. Quintana-Ortí

2020
Vol 53 (2)
pp. 1108-1113
Author(s):
Magnus Malmström
Isaac Skog
Daniel Axehill
Fredrik Gustafsson
